Sep 30 17:03:22 crc systemd[1]: Starting Kubernetes Kubelet...
Sep 30 17:03:23 crc restorecon[4584]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:03:23 crc restorecon[4584]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 
17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:03:23 crc 
restorecon[4584]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 
17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:03:23 crc restorecon[4584]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 
17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc 
restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:03:23 crc restorecon[4584]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 30 17:03:23 crc restorecon[4584]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Sep 30 17:03:23 crc restorecon[4584]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Sep 30 17:03:24 crc kubenswrapper[4821]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 30 17:03:24 crc kubenswrapper[4821]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Sep 30 17:03:24 crc kubenswrapper[4821]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 30 17:03:24 crc kubenswrapper[4821]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
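The run of "Flag ... has been deprecated" notices above, together with the --pod-infra-container-image and --system-reserved notices just below, all point at the same remedy: move these settings into the file the kubelet reads via --config. A minimal sketch of what such a KubeletConfiguration file could contain, using kubelet.config.k8s.io/v1beta1 field names; the endpoint, plugin directory, taint, and reservation values are illustrative assumptions, not values read from this node:

```yaml
# Hypothetical config-file equivalents of the deprecated kubelet flags warned
# about in this journal; every value below is an assumption for illustration.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock      # --container-runtime-endpoint
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec  # --volume-plugin-dir
registerWithTaints:                                           # --register-with-taints
- key: node-role.kubernetes.io/master
  effect: NoSchedule
systemReserved:                                               # --system-reserved
  cpu: 500m
  memory: 1Gi
evictionHard:            # the warning says to use eviction thresholds instead
  memory.available: 100Mi  # of --minimum-container-ttl-duration
```

The exception is --pod-infra-container-image, which has no config-file counterpart here: as the server.go message below notes, the sandbox image is expected to be configured in the CRI runtime itself.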
Sep 30 17:03:24 crc kubenswrapper[4821]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 30 17:03:24 crc kubenswrapper[4821]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.485599 4821 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.490862 4821 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.490909 4821 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.490915 4821 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.490920 4821 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.490926 4821 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.490931 4821 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.490939 4821 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.490948 4821 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.490955 4821 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.490961 4821 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.490967 4821 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.490971 4821 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.490977 4821 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.490982 4821 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.490987 4821 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.490992 4821 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.490996 4821 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491001 4821 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491008 4821 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491012 4821 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491017 4821 feature_gate.go:330] unrecognized feature gate: OVNObservability
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491022 4821 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491028 4821 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491033 4821 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491038 4821 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491043 4821 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491047 4821 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491051 4821 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491056 4821 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491060 4821 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491065 4821 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491069 4821 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491073 4821 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491101 4821 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491106 4821 feature_gate.go:330] unrecognized feature gate: SignatureStores
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491111 4821 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491116 4821 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491120 4821 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491127 4821 feature_gate.go:330] unrecognized feature gate: Example
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491132 4821 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491156 4821 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491164 4821 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491171 4821 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491177 4821 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491183 4821 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491188 4821 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491194 4821 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491198 4821 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491203 4821 feature_gate.go:330] unrecognized feature gate: PinnedImages Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491209 4821 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491213 4821 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491218 4821 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491223 4821 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491229 4821 feature_gate.go:330] unrecognized feature gate: NewOLM Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491234 4821 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491239 4821 feature_gate.go:330] unrecognized feature gate: PlatformOperators Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491244 4821 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491249 4821 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491253 4821 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491258 4821 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491264 4821 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491269 4821 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491273 4821 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491278 4821 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491283 4821 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491289 4821 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491293 4821 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491301 4821 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491306 4821 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.491311 4821 feature_gate.go:330] 
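
The long run of feature_gate.go:330 warnings above is expected on OpenShift: the cluster-wide gate list includes many OpenShift-only gates that the upstream kubelet does not know, so it warns and ignores them, while the few recognized GA or deprecated gates (ValidatingAdmissionPolicy, CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders, KMSv1) produce the feature_gate.go:351/353 notices instead. A small, hypothetical stdlib helper (name and invocation illustrative only) to summarize these warnings from a journal dump:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"sort"
)

// Count feature_gate.go:330 warnings from a journal dump, e.g.
//   journalctl -u kubelet | go run gatewarnings.go
func main() {
	re := regexp.MustCompile(`unrecognized feature gate: (\S+)`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1<<20), 1<<20)
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	names := make([]string, 0, len(counts))
	for name := range counts {
		names = append(names, name)
	}
	sort.Strings(names)
	for _, name := range names {
		// Each gate shows up once per parsing pass (four passes in this boot).
		fmt.Printf("%-55s x%d\n", name, counts[name])
	}
}
```
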
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492145 4821 flags.go:64] FLAG: --address="0.0.0.0"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492166 4821 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492175 4821 flags.go:64] FLAG: --anonymous-auth="true"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492181 4821 flags.go:64] FLAG: --application-metrics-count-limit="100"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492187 4821 flags.go:64] FLAG: --authentication-token-webhook="false"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492192 4821 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492200 4821 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492205 4821 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492210 4821 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492215 4821 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492220 4821 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492225 4821 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492229 4821 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492234 4821 flags.go:64] FLAG: --cgroup-root=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492238 4821 flags.go:64] FLAG: --cgroups-per-qos="true"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492243 4821 flags.go:64] FLAG: --client-ca-file=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492248 4821 flags.go:64] FLAG: --cloud-config=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492253 4821 flags.go:64] FLAG: --cloud-provider=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492257 4821 flags.go:64] FLAG: --cluster-dns="[]"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492265 4821 flags.go:64] FLAG: --cluster-domain=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492269 4821 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492276 4821 flags.go:64] FLAG: --config-dir=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492282 4821 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492288 4821 flags.go:64] FLAG: --container-log-max-files="5"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492295 4821 flags.go:64] FLAG: --container-log-max-size="10Mi"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492300 4821 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492305 4821 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492324 4821 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492329 4821 flags.go:64] FLAG: --contention-profiling="false"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492334 4821 flags.go:64] FLAG: --cpu-cfs-quota="true"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492338 4821 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492342 4821 flags.go:64] FLAG: --cpu-manager-policy="none"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492352 4821 flags.go:64] FLAG: --cpu-manager-policy-options=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492359 4821 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492370 4821 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492377 4821 flags.go:64] FLAG: --enable-debugging-handlers="true"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492383 4821 flags.go:64] FLAG: --enable-load-reader="false"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492389 4821 flags.go:64] FLAG: --enable-server="true"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492394 4821 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492402 4821 flags.go:64] FLAG: --event-burst="100"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492407 4821 flags.go:64] FLAG: --event-qps="50"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492411 4821 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492416 4821 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492421 4821 flags.go:64] FLAG: --eviction-hard=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492428 4821 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492433 4821 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492439 4821 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492444 4821 flags.go:64] FLAG: --eviction-soft=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492449 4821 flags.go:64] FLAG: --eviction-soft-grace-period=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492455 4821 flags.go:64] FLAG: --exit-on-lock-contention="false"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492459 4821 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492464 4821 flags.go:64] FLAG: --experimental-mounter-path=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492468 4821 flags.go:64] FLAG: --fail-cgroupv1="false"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492473 4821 flags.go:64] FLAG: --fail-swap-on="true"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492477 4821 flags.go:64] FLAG: --feature-gates=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492482 4821 flags.go:64] FLAG: --file-check-frequency="20s"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492487 4821 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492491 4821 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492496 4821 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492500 4821 flags.go:64] FLAG: --healthz-port="10248"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492505 4821 flags.go:64] FLAG: --help="false"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492509 4821 flags.go:64] FLAG: --hostname-override=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492513 4821 flags.go:64] FLAG: --housekeeping-interval="10s"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492517 4821 flags.go:64] FLAG: --http-check-frequency="20s"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492521 4821 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492525 4821 flags.go:64] FLAG: --image-credential-provider-config=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492529 4821 flags.go:64] FLAG: --image-gc-high-threshold="85"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492533 4821 flags.go:64] FLAG: --image-gc-low-threshold="80"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492537 4821 flags.go:64] FLAG: --image-service-endpoint=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492541 4821 flags.go:64] FLAG: --kernel-memcg-notification="false"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492545 4821 flags.go:64] FLAG: --kube-api-burst="100"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492549 4821 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492554 4821 flags.go:64] FLAG: --kube-api-qps="50"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492560 4821 flags.go:64] FLAG: --kube-reserved=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492565 4821 flags.go:64] FLAG: --kube-reserved-cgroup=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492569 4821 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492573 4821 flags.go:64] FLAG: --kubelet-cgroups=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492577 4821 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492581 4821 flags.go:64] FLAG: --lock-file=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492585 4821 flags.go:64] FLAG: --log-cadvisor-usage="false"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492589 4821 flags.go:64] FLAG: --log-flush-frequency="5s"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492594 4821 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492601 4821 flags.go:64] FLAG: --log-json-split-stream="false"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492605 4821 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492609 4821 flags.go:64] FLAG: --log-text-split-stream="false"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492614 4821 flags.go:64] FLAG: --logging-format="text"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492618 4821 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492623 4821 flags.go:64] FLAG: --make-iptables-util-chains="true"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492627 4821 flags.go:64] FLAG: --manifest-url=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492631 4821 flags.go:64] FLAG: --manifest-url-header=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492637 4821 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492642 4821 flags.go:64] FLAG: --max-open-files="1000000"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492647 4821 flags.go:64] FLAG: --max-pods="110"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492651 4821 flags.go:64] FLAG: --maximum-dead-containers="-1"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492656 4821 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492660 4821 flags.go:64] FLAG: --memory-manager-policy="None"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492664 4821 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492669 4821 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492673 4821 flags.go:64] FLAG: --node-ip="192.168.126.11"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492677 4821 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492690 4821 flags.go:64] FLAG: --node-status-max-images="50"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492695 4821 flags.go:64] FLAG: --node-status-update-frequency="10s"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492701 4821 flags.go:64] FLAG: --oom-score-adj="-999"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492705 4821 flags.go:64] FLAG: --pod-cidr=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492710 4821 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492718 4821 flags.go:64] FLAG: --pod-manifest-path=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492724 4821 flags.go:64] FLAG: --pod-max-pids="-1"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492729 4821 flags.go:64] FLAG: --pods-per-core="0"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492734 4821 flags.go:64] FLAG: --port="10250"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492739 4821 flags.go:64] FLAG: --protect-kernel-defaults="false"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492743 4821 flags.go:64] FLAG: --provider-id=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492758 4821 flags.go:64] FLAG: --qos-reserved=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492763 4821 flags.go:64] FLAG: --read-only-port="10255"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492767 4821 flags.go:64] FLAG: --register-node="true"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492772 4821 flags.go:64] FLAG: --register-schedulable="true"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492776 4821 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492784 4821 flags.go:64] FLAG: --registry-burst="10"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492788 4821 flags.go:64] FLAG: --registry-qps="5"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492793 4821 flags.go:64] FLAG: --reserved-cpus=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492797 4821 flags.go:64] FLAG: --reserved-memory=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492803 4821 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492807 4821 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492812 4821 flags.go:64] FLAG: --rotate-certificates="false"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492816 4821 flags.go:64] FLAG: --rotate-server-certificates="false"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492820 4821 flags.go:64] FLAG: --runonce="false"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492824 4821 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492828 4821 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492833 4821 flags.go:64] FLAG: --seccomp-default="false"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492837 4821 flags.go:64] FLAG: --serialize-image-pulls="true"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492841 4821 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492850 4821 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492855 4821 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492859 4821 flags.go:64] FLAG: --storage-driver-password="root"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492863 4821 flags.go:64] FLAG: --storage-driver-secure="false"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492868 4821 flags.go:64] FLAG: --storage-driver-table="stats"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492871 4821 flags.go:64] FLAG: --storage-driver-user="root"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492876 4821 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492880 4821 flags.go:64] FLAG: --sync-frequency="1m0s"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492885 4821 flags.go:64] FLAG: --system-cgroups=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492889 4821 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492895 4821 flags.go:64] FLAG: --system-reserved-cgroup=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492899 4821 flags.go:64] FLAG: --tls-cert-file=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.492903 4821 flags.go:64] FLAG: --tls-cipher-suites="[]"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.493971 4821 flags.go:64] FLAG: --tls-min-version=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.493980 4821 flags.go:64] FLAG: --tls-private-key-file=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.493985 4821 flags.go:64] FLAG: --topology-manager-policy="none"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.493991 4821 flags.go:64] FLAG: --topology-manager-policy-options=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.493996 4821 flags.go:64] FLAG: --topology-manager-scope="container"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.494002 4821 flags.go:64] FLAG: --v="2"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.494010 4821 flags.go:64] FLAG: --version="false"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.494017 4821 flags.go:64] FLAG: --vmodule=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.494023 4821 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.494029 4821 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
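
At --v="2" the kubelet echoes every command-line flag through flags.go:64 in FLAG: --name="value" form, which makes the effective invocation easy to capture and diff across restarts. A hedged sketch of a parser for this dump (stdlib only; helper name, invocation, and the flag names printed at the end are illustrative, taken from this log):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

// Rebuild the effective flag set from the flags.go:64 lines, e.g.
//   journalctl -u kubelet | go run flagdump.go
func main() {
	flagLine := regexp.MustCompile(`flags\.go:64\] FLAG: --([^=]+)=(.*)`)
	flags := map[string]string{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1<<20), 1<<20)
	for sc.Scan() {
		if m := flagLine.FindStringSubmatch(sc.Text()); m != nil {
			flags[m[1]] = strings.Trim(m[2], `"`)
		}
	}
	// A few flags worth eyeballing on this node (names from the dump above).
	for _, name := range []string{"config", "container-runtime-endpoint", "node-ip", "system-reserved"} {
		fmt.Printf("--%s = %q\n", name, flags[name])
	}
}
```
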
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494176 4821 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494185 4821 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494191 4821 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494197 4821 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494203 4821 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494208 4821 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494214 4821 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494219 4821 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494224 4821 feature_gate.go:330] unrecognized feature gate: NewOLM
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494231 4821 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494236 4821 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494241 4821 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494245 4821 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494249 4821 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494254 4821 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494258 4821 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494263 4821 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494268 4821 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494272 4821 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494276 4821 feature_gate.go:330] unrecognized feature gate: SignatureStores
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494280 4821 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494286 4821 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494292 4821 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494296 4821 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494301 4821 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494307 4821 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494312 4821 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494319 4821 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494323 4821 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494328 4821 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494332 4821 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494336 4821 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494340 4821 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494345 4821 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494349 4821 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494354 4821 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494358 4821 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494362 4821 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494366 4821 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494370 4821 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494376 4821 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494383 4821 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494389 4821 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494395 4821 feature_gate.go:330] unrecognized feature gate: PinnedImages
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494400 4821 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494405 4821 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494410 4821 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494414 4821 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494419 4821 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494426 4821 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494432 4821 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494437 4821 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494441 4821 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494445 4821 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494450 4821 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494454 4821 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494459 4821 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494463 4821 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494467 4821 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494472 4821 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494476 4821 feature_gate.go:330] unrecognized feature gate: OVNObservability
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494480 4821 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494485 4821 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494490 4821 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494495 4821 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494499 4821 feature_gate.go:330] unrecognized feature gate: Example
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494504 4821 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494508 4821 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494513 4821 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494517 4821 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.494521 4821 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.494536 4821 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
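
The same warning block repeats several times in this boot because the gate list is re-parsed for each configuration consumer, and each pass ends in a feature_gate.go:386 line that shows only explicit overrides merged over built-in defaults. A rough sketch of that registry mechanism, assuming the k8s.io/component-base/featuregate API these messages come from (gate names and specs below are illustrative, not the kubelet's real table):

```go
package main

import (
	"fmt"

	"k8s.io/component-base/featuregate"
)

func main() {
	// A registry knows a fixed set of gates with defaults and maturity levels.
	fg := featuregate.NewFeatureGate()
	if err := fg.Add(map[featuregate.Feature]featuregate.FeatureSpec{
		"KMSv1":    {Default: false, PreRelease: featuregate.Deprecated},
		"NodeSwap": {Default: false, PreRelease: featuregate.Beta},
	}); err != nil {
		panic(err)
	}
	// Set() applies overrides; unknown names would error here, whereas the
	// kubelet's config path downgrades them to the warnings seen above.
	if err := fg.Set("KMSv1=true"); err != nil {
		panic(err)
	}
	fmt.Println("KMSv1 enabled:", fg.Enabled("KMSv1"))       // true (override)
	fmt.Println("NodeSwap enabled:", fg.Enabled("NodeSwap")) // false (default)
}
```
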
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.503685 4821 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.503727 4821 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.503826 4821 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.503846 4821 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.503852 4821 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.503858 4821 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.503865 4821 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.503871 4821 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.503876 4821 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.503881 4821 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.503886 4821 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.503892 4821 feature_gate.go:330] unrecognized feature gate: Example
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.503899 4821 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.503908 4821 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.503913 4821 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.503932 4821 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.503938 4821 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.503943 4821 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.503948 4821 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.503953 4821 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.503958 4821 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.503963 4821 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.503969 4821 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.503974 4821 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.503979 4821 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.503984 4821 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.503988 4821 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.503995 4821 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504002 4821 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504009 4821 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504015 4821 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504021 4821 feature_gate.go:330] unrecognized feature gate: SignatureStores
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504027 4821 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504033 4821 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504040 4821 feature_gate.go:330] unrecognized feature gate: NewOLM
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504045 4821 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504050 4821 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504056 4821 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504061 4821 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504066 4821 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504072 4821 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504077 4821 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504104 4821 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504110 4821 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504115 4821 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504121 4821 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504126 4821 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504131 4821 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504136 4821 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504142 4821 feature_gate.go:330] unrecognized feature gate: PinnedImages
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504147 4821 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504152 4821 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504159 4821 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504165 4821 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504171 4821 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504176 4821 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504183 4821 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504188 4821 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504198 4821 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504204 4821 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504210 4821 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504215 4821 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504219 4821 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504225 4821 feature_gate.go:330] unrecognized feature gate: OVNObservability
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504231 4821 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504235 4821 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504240 4821 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504244 4821 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504249 4821 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504253 4821 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504258 4821 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504262 4821 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504266 4821 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.504274 4821 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504426 4821 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504437 4821 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504442 4821 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504447 4821 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504451 4821 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504456 4821 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504461 4821 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504466 4821 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504471 4821 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504475 4821 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504480 4821 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504485 4821 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504489 4821 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504496 4821 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504502 4821 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504507 4821 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504512 4821 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504518 4821 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504524 4821 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504529 4821 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504533 4821 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504538 4821 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504543 4821 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504548 4821 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504553 4821 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504560 4821 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504565 4821 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504571 4821 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504578 4821 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504583 4821 feature_gate.go:330] unrecognized feature gate: NewOLM
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504589 4821 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504594 4821 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504600 4821 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504604 4821 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504610 4821 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504618 4821 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504623 4821 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504628 4821 feature_gate.go:330] unrecognized feature gate: SignatureStores
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504633 4821 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504637 4821 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504642 4821 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504646 4821 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504651 4821 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504656 4821 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504661 4821 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504665 4821 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504670 4821 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504675 4821 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504679 4821 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504685 4821 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504690 4821 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504695 4821 feature_gate.go:330] unrecognized feature gate: PinnedImages
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504700 4821 feature_gate.go:330] unrecognized feature gate: OVNObservability
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504705 4821 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504710 4821 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504715 4821 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504720 4821 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504724 4821 feature_gate.go:330] unrecognized feature gate: Example
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504729 4821 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504733 4821 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504738 4821 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504744 4821 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504749 4821 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504754 4821 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504758 4821 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504762 4821 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504767 4821 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504771 4821 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504776 4821 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504781 4821 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.504785 4821 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.504792 4821 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.507574 4821 server.go:940] "Client rotation is on, will bootstrap in background"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.514015 4821 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.514832 4821 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
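
The certificate_store.go:130 line names the on-disk client credential, and the same file can be inspected directly to verify the expiry that the certificate manager reports next. A small stdlib sketch (path taken from the log line above; the file typically holds the certificate and private key concatenated, so non-certificate PEM blocks are skipped):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		panic(err)
	}
	for len(data) > 0 {
		var block *pem.Block
		block, data = pem.Decode(data)
		if block == nil {
			break // no more PEM blocks
		}
		if block.Type != "CERTIFICATE" {
			continue // skip the private key block
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Println("subject:   ", cert.Subject)
		fmt.Println("not before:", cert.NotBefore)
		fmt.Println("not after: ", cert.NotAfter)
	}
}
```
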
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.516550 4821 server.go:997] "Starting client certificate rotation"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.516584 4821 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.518348 4821 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-09 14:48:47.89638184 +0000 UTC
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.518461 4821 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 2421h45m23.377923817s for next certificate rotation
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.543743 4821 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.545823 4821 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.562497 4821 log.go:25] "Validated CRI v1 runtime API"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.598907 4821 log.go:25] "Validated CRI v1 image API"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.601113 4821 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.607420 4821 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-09-30-16-57-12-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.607446 4821 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.618129 4821 manager.go:217] Machine: {Timestamp:2025-09-30 17:03:24.615802324 +0000 UTC m=+0.520848288 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2799998 MemoryCapacity:25199476736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:3c12aacb-94c6-4a5c-b29c-6c2e5c30c341 BootID:2052ba73-7f50-4844-a6ef-43008c5ca24e Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599738368 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:b1:36:31 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:b1:36:31 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:ac:d2:15 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:28:2e:9d Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:38:b0:d4 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:53:5e:ba Speed:-1 Mtu:1496} {Name:eth10 MacAddress:a2:db:c9:8f:71:8d Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ae:d4:b9:96:2b:e8 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199476736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.618306 4821 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
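The certificate_manager.go entries in this block pair a certificate expiration with an earlier, jittered rotation deadline, and the logged "Waiting ..." duration is simply that deadline minus the moment the line was written. A small self-contained Go check of that arithmetic, using values copied from the entries above (the layout string is Go's default time.Time print format; error returns are ignored for brevity in this sketch):

package main

import (
    "fmt"
    "time"
)

func main() {
    // Values taken from the kube-apiserver-client-kubelet lines above.
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    deadline, _ := time.Parse(layout, "2026-01-09 14:48:47.89638184 +0000 UTC")
    logged, _ := time.Parse(layout, "2025-09-30 17:03:24 +0000 UTC")
    wait, _ := time.ParseDuration("2421h45m23.377923817s")

    // The "Waiting ..." value is deadline minus the log timestamp.
    fmt.Println(deadline.Sub(logged)) // ~2421h45m23.9s
    fmt.Println(wait)                 // 2421h45m23.377923817s
}

Run as-is this prints roughly 2421h45m23.9s against the logged 2421h45m23.377923817s; the half-second gap is only because the sketch drops the sub-second part of the log timestamp (17:03:24.518461).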
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.618481 4821 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.618763 4821 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.618916 4821 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.618947 4821 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.619169 4821 topology_manager.go:138] "Creating topology manager with none policy"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.619178 4821 container_manager_linux.go:303] "Creating device plugin manager"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.619596 4821 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.619643 4821 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.620492 4821 state_mem.go:36] "Initialized new in-memory state store"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.620861 4821 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.623994 4821 kubelet.go:418] "Attempting to sync node with API server"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.624015 4821 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
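The nodeConfig JSON above is the container manager's startup view of reserved resources and hard eviction thresholds. A short Go sketch that decodes a fragment of it; these struct definitions are simplified stand-ins for illustration, not the real types from k8s.io/kubernetes:

package main

import (
    "encoding/json"
    "fmt"
)

// Simplified stand-ins for the kubelet's node config; only the fields
// exercised below are modeled.
type evictionValue struct {
    Quantity   *string // e.g. "100Mi"; null when the threshold is percentage-based
    Percentage float64
}

type evictionThreshold struct {
    Signal   string
    Operator string
    Value    evictionValue
}

type nodeConfig struct {
    SystemReserved         map[string]string
    HardEvictionThresholds []evictionThreshold
}

func main() {
    // Fragment shaped like the "Creating Container Manager object" entry above.
    raw := `{"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},
             "HardEvictionThresholds":[
               {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
               {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}}]}`
    var cfg nodeConfig
    if err := json.Unmarshal([]byte(raw), &cfg); err != nil {
        panic(err)
    }
    for _, t := range cfg.HardEvictionThresholds {
        q := "n/a"
        if t.Value.Quantity != nil {
            q = *t.Value.Quantity
        }
        fmt.Printf("%s %s quantity=%s percentage=%.2f\n", t.Signal, t.Operator, q, t.Value.Percentage)
    }
}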
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.624029 4821 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.624042 4821 kubelet.go:324] "Adding apiserver pod source"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.624054 4821 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.630933 4821 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.631992 4821 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.632993 4821 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused
Sep 30 17:03:24 crc kubenswrapper[4821]: E0930 17:03:24.633147 4821 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError"
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.632999 4821 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused
Sep 30 17:03:24 crc kubenswrapper[4821]: E0930 17:03:24.633231 4821 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.634981 4821 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.636699 4821 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.636742 4821 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.636757 4821 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.636772 4821 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.636792 4821 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.636804 4821 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.636816 4821 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.636835 4821 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.636849 4821 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.636862 4821 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.636895 4821 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.636907 4821 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.637775 4821 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.638403 4821 server.go:1280] "Started kubelet"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.639583 4821 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.639591 4821 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 30 17:03:24 crc systemd[1]: Started Kubernetes Kubelet.
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.640455 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.640951 4821 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.641513 4821 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.641525 4821 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 10:14:55.114976254 +0000 UTC
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.641542 4821 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 977h11m30.473436049s for next certificate rotation
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.641561 4821 volume_manager.go:287] "The desired_state_of_world populator starts"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.641567 4821 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.642260 4821 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 30 17:03:24 crc kubenswrapper[4821]: E0930 17:03:24.642349 4821 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.642412 4821 server.go:460] "Adding debug handlers to kubelet server"
Sep 30 17:03:24 crc kubenswrapper[4821]: E0930 17:03:24.642779 4821 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" interval="200ms"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.642994 4821 factory.go:55] Registering systemd factory
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.643012 4821 factory.go:221] Registration of the systemd container factory successfully
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.643280 4821 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.643501 4821 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused
Sep 30 17:03:24 crc kubenswrapper[4821]: E0930 17:03:24.643556 4821 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.644171 4821 factory.go:153] Registering CRI-O factory
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.644188 4821 factory.go:221] Registration of the crio container factory successfully
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.644240 4821 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.644264 4821 factory.go:103] Registering Raw factory
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.644278 4821 manager.go:1196] Started watching for new ooms in manager
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.644795 4821 manager.go:319] Starting recovery of all containers
Sep 30 17:03:24 crc kubenswrapper[4821]: E0930 17:03:24.646927 4821 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.143:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.186a1e3205ef4927 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-09-30 17:03:24.638365991 +0000 UTC m=+0.543411965,LastTimestamp:2025-09-30 17:03:24.638365991 +0000 UTC m=+0.543411965,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664210 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664470 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664484 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"
volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664495 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664507 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664517 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664531 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664542 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664558 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664567 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664601 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664616 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664625 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664639 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664649 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664662 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664678 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664691 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664712 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664727 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664746 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664761 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664775 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664790 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664801 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664812 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664826 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664841 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664850 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664863 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664873 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664887 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664902 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664913 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664925 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664935 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664948 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664959 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664968 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664979 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.664990 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665001 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665031 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665042 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665053 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665065 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665090 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665100 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665115 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665128 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665139 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665152 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665168 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665182 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665195 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665206 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665220 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665229 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665242 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665252 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665262 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665273 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665284 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665295 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665305 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665314 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665327 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665338 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665352 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665360 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665368 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665380 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665389 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665400 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665409 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665420 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665431 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665441 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665453 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665464 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665479 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665491 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665505 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665516 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665527 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665538 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665331 4821 manager.go:324] Recovery completed Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665550 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665609 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665624 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665638 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665648 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665658 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665670 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665681 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665692 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665700 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665709 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665720 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665731 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665749 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665760 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665771 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.665784 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.666390 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.666411 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.666422 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.666432 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.666441 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.666451 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.666461 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.666470 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.666481 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.666491 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.666501 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.666510 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.666519 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.666528 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.666538 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.666546 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.666557 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.666566 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.666575 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.666585 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.666594 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.666603 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.666612 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.666620 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.666629 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.666639 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.666648 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.666656 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.666666 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.666675 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.666685 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.666696 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668379 4821 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668408 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668423 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668443 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668457 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668471 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668483 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668495 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668508 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668519 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Sep 30 17:03:24 crc 
kubenswrapper[4821]: I0930 17:03:24.668531 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668541 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668551 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668560 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668569 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668578 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668587 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668596 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668605 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668614 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668622 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668631 4821 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668639 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668648 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668657 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668666 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668674 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668683 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668691 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668700 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668709 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668720 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668731 4821 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668741 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668751 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668761 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668772 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668786 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668799 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668810 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668822 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668835 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668849 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668875 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668887 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668898 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668907 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668917 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668928 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668937 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668946 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668956 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668965 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668974 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668983 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.668993 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.669001 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.669011 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.669020 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.669029 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.669039 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.669049 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.669057 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.669066 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.669093 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.669103 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.669113 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.669124 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.669132 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.669142 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.669151 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.669160 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.669172 4821 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.669182 4821 reconstruct.go:97] "Volume reconstruction finished" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.669191 4821 reconciler.go:26] "Reconciler: start to sync state" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.676949 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.679230 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.679267 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.679277 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.681825 4821 cpu_manager.go:225] "Starting CPU manager" policy="none" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.681844 4821 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Sep 30 
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.681862 4821 state_mem.go:36] "Initialized new in-memory state store"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.698641 4821 policy_none.go:49] "None policy: Start"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.702372 4821 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.703444 4821 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.703475 4821 state_mem.go:35] "Initializing new in-memory state store"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.705699 4821 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.705740 4821 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.705773 4821 kubelet.go:2335] "Starting kubelet main sync loop"
Sep 30 17:03:24 crc kubenswrapper[4821]: E0930 17:03:24.705825 4821 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 30 17:03:24 crc kubenswrapper[4821]: W0930 17:03:24.707241 4821 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused
Sep 30 17:03:24 crc kubenswrapper[4821]: E0930 17:03:24.707355 4821 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError"
Sep 30 17:03:24 crc kubenswrapper[4821]: E0930 17:03:24.742451 4821 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.759202 4821 manager.go:334] "Starting Device Plugin manager"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.759285 4821 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.759300 4821 server.go:79] "Starting device plugin registration server"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.759818 4821 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.759841 4821 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.760024 4821 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.760190 4821 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.760200 4821 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 30 17:03:24 crc kubenswrapper[4821]: E0930 17:03:24.768826 4821 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
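At this point everything that needs the API server fails the same way: dial tcp 38.102.83.143:6443: connect: connection refused. Nothing is listening yet because api-int.crc.testing is served by kube-apiserver-crc, one of the static pods this very kubelet is about to start; the "node \"crc\" not found" and PLEG errors are the same bootstrap ordering, not independent faults. A throwaway probe of the same endpoint reproduces the symptom (hypothetical helper, not part of the kubelet; only the address comes from the log, the retry pacing is arbitrary):

```go
// Sketch: probe the endpoint the kubelet is failing to reach.
// "connection refused" just means no listener yet on 6443.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "api-int.crc.testing:6443" // from the log
	for attempt := 1; attempt <= 5; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("attempt %d: %v\n", attempt, err)
			time.Sleep(time.Duration(attempt) * 400 * time.Millisecond) // crude backoff
			continue
		}
		conn.Close()
		fmt.Println("API server is accepting connections")
		return
	}
}
```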
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.806241 4821 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.806324 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.807174 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.807204 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.807212 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.807323 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.807512 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.807570 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.807894 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.807917 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.807926 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.808072 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
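"SyncLoop ADD" with source="file" means these five control-plane pods come from static manifests on disk rather than from the API server, which is what lets the kubelet bootstrap etcd and the API server before either exists as an API object; the "No sandbox for pod can be found. Need to start a new one" lines then show CRI-O holding no pre-existing sandboxes for them after the restart. A sketch of enumerating such manifests (the directory is the conventional staticPodPath and is an assumption here; this log does not show the kubelet's configured path):

```go
// Sketch: list static pod manifests the way a file pod source would
// discover them. The path is hypothetical for this node.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/manifests" // assumed staticPodPath
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read static pod dir:", err)
		return
	}
	for _, e := range entries {
		ext := filepath.Ext(e.Name())
		if ext == ".yaml" || ext == ".yml" || ext == ".json" {
			fmt.Println("static pod manifest:", filepath.Join(dir, e.Name()))
		}
	}
}
```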
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.808239 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.808648 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.808673 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.808687 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.808791 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.808806 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.808813 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.808904 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.809004 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.809045 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.809588 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.809631 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.809641 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.809786 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.809924 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.809947 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.810230 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.810246 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.810254 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.811646 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.811669 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.811680 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.812694 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.812700 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.812714 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.812717 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.812726 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.812727 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.812839 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.812857 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.813465 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.813485 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.813507 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:24 crc kubenswrapper[4821]: E0930 17:03:24.844294 4821 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" interval="400ms" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.860487 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.861493 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.861525 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.861538 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.861559 4821 kubelet_node_status.go:76] "Attempting to register node" node="crc" Sep 30 17:03:24 crc kubenswrapper[4821]: E0930 17:03:24.861979 4821 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.143:6443: connect: connection refused" node="crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.870723 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.870750 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.870770 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.870806 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.870823 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.870839 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.870952 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.871017 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.871093 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.871149 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.871195 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.871229 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.871253 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.871300 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.871339 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.972212 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.972536 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.972623 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.972713 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.972784 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.972686 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.972631 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.972788 4821 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.972831 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.973100 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.973045 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.973162 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.973187 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.973207 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.973255 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.973277 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.973320 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 17:03:24 crc 
kubenswrapper[4821]: I0930 17:03:24.973352 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.973397 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.973423 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.973501 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.972386 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.973569 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.973598 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.973612 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.973625 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.973640 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.973653 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.973664 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 30 17:03:24 crc kubenswrapper[4821]: I0930 17:03:24.973675 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:03:25 crc kubenswrapper[4821]: I0930 17:03:25.062481 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:25 crc kubenswrapper[4821]: I0930 17:03:25.063800 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:25 crc kubenswrapper[4821]: I0930 17:03:25.063987 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:25 crc kubenswrapper[4821]: I0930 17:03:25.064002 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:25 crc kubenswrapper[4821]: I0930 17:03:25.064117 4821 kubelet_node_status.go:76] "Attempting to register node" node="crc" Sep 30 17:03:25 crc kubenswrapper[4821]: E0930 17:03:25.064545 4821 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.143:6443: connect: connection refused" node="crc" Sep 30 17:03:25 crc kubenswrapper[4821]: I0930 17:03:25.130558 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Sep 30 17:03:25 crc kubenswrapper[4821]: I0930 17:03:25.148974 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:03:25 crc kubenswrapper[4821]: I0930 17:03:25.173337 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 17:03:25 crc kubenswrapper[4821]: W0930 17:03:25.184400 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-623e266c3938fdd5fa8c3018f3115dc326aaa3c0ffc2ce2c876129b7efc6e1d9 WatchSource:0}: Error finding container 623e266c3938fdd5fa8c3018f3115dc326aaa3c0ffc2ce2c876129b7efc6e1d9: Status 404 returned error can't find the container with id 623e266c3938fdd5fa8c3018f3115dc326aaa3c0ffc2ce2c876129b7efc6e1d9 Sep 30 17:03:25 crc kubenswrapper[4821]: I0930 17:03:25.191606 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 30 17:03:25 crc kubenswrapper[4821]: W0930 17:03:25.194373 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-be674029b3dd9b3214fd94c622e856a1cda3fc6a65f718a998c92e3e75bc9e8b WatchSource:0}: Error finding container be674029b3dd9b3214fd94c622e856a1cda3fc6a65f718a998c92e3e75bc9e8b: Status 404 returned error can't find the container with id be674029b3dd9b3214fd94c622e856a1cda3fc6a65f718a998c92e3e75bc9e8b Sep 30 17:03:25 crc kubenswrapper[4821]: I0930 17:03:25.200839 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Sep 30 17:03:25 crc kubenswrapper[4821]: W0930 17:03:25.208729 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-802df38ca0ae6db9ec11d630635bbe7b638d3f3bfe7576d6299773e1b166c6e9 WatchSource:0}: Error finding container 802df38ca0ae6db9ec11d630635bbe7b638d3f3bfe7576d6299773e1b166c6e9: Status 404 returned error can't find the container with id 802df38ca0ae6db9ec11d630635bbe7b638d3f3bfe7576d6299773e1b166c6e9 Sep 30 17:03:25 crc kubenswrapper[4821]: W0930 17:03:25.209927 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-e38ae3cc28b001b79f1d5e09b4f734f54d30b17f36082cb428a074d87e10d15b WatchSource:0}: Error finding container e38ae3cc28b001b79f1d5e09b4f734f54d30b17f36082cb428a074d87e10d15b: Status 404 returned error can't find the container with id e38ae3cc28b001b79f1d5e09b4f734f54d30b17f36082cb428a074d87e10d15b Sep 30 17:03:25 crc kubenswrapper[4821]: W0930 17:03:25.217800 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-7b8101d952233d127f9e6b8ce64b14508b644ee4571d17226a050a1d8fa22fdd WatchSource:0}: Error finding container 7b8101d952233d127f9e6b8ce64b14508b644ee4571d17226a050a1d8fa22fdd: Status 404 returned error can't find the container with id 7b8101d952233d127f9e6b8ce64b14508b644ee4571d17226a050a1d8fa22fdd Sep 30 17:03:25 crc kubenswrapper[4821]: E0930 17:03:25.244929 4821 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" interval="800ms" Sep 30 17:03:25 crc kubenswrapper[4821]: I0930 17:03:25.465051 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:25 crc kubenswrapper[4821]: I0930 17:03:25.466583 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:25 crc kubenswrapper[4821]: I0930 17:03:25.466633 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:25 crc kubenswrapper[4821]: I0930 17:03:25.466646 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:25 crc kubenswrapper[4821]: I0930 17:03:25.466673 4821 kubelet_node_status.go:76] "Attempting to 
register node" node="crc" Sep 30 17:03:25 crc kubenswrapper[4821]: E0930 17:03:25.467224 4821 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.143:6443: connect: connection refused" node="crc" Sep 30 17:03:25 crc kubenswrapper[4821]: W0930 17:03:25.555754 4821 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused Sep 30 17:03:25 crc kubenswrapper[4821]: E0930 17:03:25.555873 4821 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError" Sep 30 17:03:25 crc kubenswrapper[4821]: I0930 17:03:25.642817 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused Sep 30 17:03:25 crc kubenswrapper[4821]: I0930 17:03:25.709548 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e38ae3cc28b001b79f1d5e09b4f734f54d30b17f36082cb428a074d87e10d15b"} Sep 30 17:03:25 crc kubenswrapper[4821]: I0930 17:03:25.711583 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"802df38ca0ae6db9ec11d630635bbe7b638d3f3bfe7576d6299773e1b166c6e9"} Sep 30 17:03:25 crc kubenswrapper[4821]: I0930 17:03:25.712448 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"623e266c3938fdd5fa8c3018f3115dc326aaa3c0ffc2ce2c876129b7efc6e1d9"} Sep 30 17:03:25 crc kubenswrapper[4821]: I0930 17:03:25.713275 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"be674029b3dd9b3214fd94c622e856a1cda3fc6a65f718a998c92e3e75bc9e8b"} Sep 30 17:03:25 crc kubenswrapper[4821]: I0930 17:03:25.714268 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7b8101d952233d127f9e6b8ce64b14508b644ee4571d17226a050a1d8fa22fdd"} Sep 30 17:03:25 crc kubenswrapper[4821]: W0930 17:03:25.815060 4821 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused Sep 30 17:03:25 crc kubenswrapper[4821]: E0930 17:03:25.816213 4821 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError" Sep 30 17:03:25 crc kubenswrapper[4821]: W0930 17:03:25.832174 4821 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused Sep 30 17:03:25 crc kubenswrapper[4821]: E0930 17:03:25.832234 4821 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError" Sep 30 17:03:26 crc kubenswrapper[4821]: W0930 17:03:26.019490 4821 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused Sep 30 17:03:26 crc kubenswrapper[4821]: E0930 17:03:26.019575 4821 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError" Sep 30 17:03:26 crc kubenswrapper[4821]: E0930 17:03:26.045889 4821 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" interval="1.6s" Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.268014 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.270729 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.270765 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.270774 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.270800 4821 kubelet_node_status.go:76] "Attempting to register node" node="crc" Sep 30 17:03:26 crc kubenswrapper[4821]: E0930 17:03:26.271602 4821 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.143:6443: connect: connection refused" node="crc" Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.643055 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.718353 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8"} Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.718644 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354"} Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.718729 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc"} Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.718807 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8"} Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.718962 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.719828 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.720096 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.720258 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.722091 4821 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835" exitCode=0 Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.722150 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835"} Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.722376 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.723198 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.723218 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.723226 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.724015 4821 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a5559d4a64c8e01a814c11668184c91f0920ef6699683d9770d0057a4b4d7189" exitCode=0 Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.724204 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a5559d4a64c8e01a814c11668184c91f0920ef6699683d9770d0057a4b4d7189"} Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.724315 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.724321 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.725464 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.725483 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.725492 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.726338 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.726365 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.726374 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.728042 4821 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="8008ebe7732b90014ee2dbb9e7b9cdf8e7a339a62e38e6c31e1ec7f6ca9ca7e3" exitCode=0 Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.728106 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"8008ebe7732b90014ee2dbb9e7b9cdf8e7a339a62e38e6c31e1ec7f6ca9ca7e3"} Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.728471 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.729763 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.729781 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.729793 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.731785 4821 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="5cf28842a6b40581032806bee8507201e74dac6ef16e4cba7573389672be0c37" exitCode=0 Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.731883 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"5cf28842a6b40581032806bee8507201e74dac6ef16e4cba7573389672be0c37"} Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.732078 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 
17:03:26.733178 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.733323 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:26 crc kubenswrapper[4821]: I0930 17:03:26.733430 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:27 crc kubenswrapper[4821]: I0930 17:03:27.642788 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused Sep 30 17:03:27 crc kubenswrapper[4821]: E0930 17:03:27.647281 4821 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" interval="3.2s" Sep 30 17:03:27 crc kubenswrapper[4821]: I0930 17:03:27.741135 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6"} Sep 30 17:03:27 crc kubenswrapper[4821]: I0930 17:03:27.741201 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533"} Sep 30 17:03:27 crc kubenswrapper[4821]: I0930 17:03:27.741215 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98"} Sep 30 17:03:27 crc kubenswrapper[4821]: I0930 17:03:27.741231 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0"} Sep 30 17:03:27 crc kubenswrapper[4821]: I0930 17:03:27.744214 4821 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c31c37871c436c8983c26630c277614dca68c504ba3cdd1bf7621269a6e2b099" exitCode=0 Sep 30 17:03:27 crc kubenswrapper[4821]: I0930 17:03:27.744279 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c31c37871c436c8983c26630c277614dca68c504ba3cdd1bf7621269a6e2b099"} Sep 30 17:03:27 crc kubenswrapper[4821]: I0930 17:03:27.744352 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:27 crc kubenswrapper[4821]: I0930 17:03:27.745455 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:27 crc kubenswrapper[4821]: I0930 17:03:27.745485 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:27 crc kubenswrapper[4821]: I0930 17:03:27.745496 4821 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Sep 30 17:03:27 crc kubenswrapper[4821]: I0930 17:03:27.752264 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6c4fba12f3da3af2b564ce19ef6a7306b5e4a6e41d35c2dd21d97df93c4113fd"} Sep 30 17:03:27 crc kubenswrapper[4821]: I0930 17:03:27.752369 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:27 crc kubenswrapper[4821]: I0930 17:03:27.753531 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:27 crc kubenswrapper[4821]: I0930 17:03:27.753573 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:27 crc kubenswrapper[4821]: I0930 17:03:27.753585 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:27 crc kubenswrapper[4821]: I0930 17:03:27.755914 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c43f233324c870ea4a282464cb1d7a96b75d314a73d505fe47373c4269f1daeb"} Sep 30 17:03:27 crc kubenswrapper[4821]: I0930 17:03:27.755944 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"73c51f7ec47dab237eb69ee8ea74794f461ba42a02a6f2de5787c0a3bc972313"} Sep 30 17:03:27 crc kubenswrapper[4821]: I0930 17:03:27.755956 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"fd9d50af49f77546ea6f04474dd222a4b9ee7d29f799ee20fad03c5ddf8e0b43"} Sep 30 17:03:27 crc kubenswrapper[4821]: I0930 17:03:27.755996 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:27 crc kubenswrapper[4821]: I0930 17:03:27.756003 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:27 crc kubenswrapper[4821]: I0930 17:03:27.758743 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:27 crc kubenswrapper[4821]: I0930 17:03:27.758769 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:27 crc kubenswrapper[4821]: I0930 17:03:27.758779 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:27 crc kubenswrapper[4821]: I0930 17:03:27.758922 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:27 crc kubenswrapper[4821]: I0930 17:03:27.758958 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:27 crc kubenswrapper[4821]: I0930 17:03:27.758975 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:27 crc kubenswrapper[4821]: I0930 17:03:27.872227 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Sep 30 17:03:27 crc kubenswrapper[4821]: I0930 17:03:27.873623 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:27 crc kubenswrapper[4821]: I0930 17:03:27.873664 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:27 crc kubenswrapper[4821]: I0930 17:03:27.873674 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:27 crc kubenswrapper[4821]: I0930 17:03:27.873709 4821 kubelet_node_status.go:76] "Attempting to register node" node="crc" Sep 30 17:03:27 crc kubenswrapper[4821]: E0930 17:03:27.874322 4821 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.143:6443: connect: connection refused" node="crc" Sep 30 17:03:28 crc kubenswrapper[4821]: W0930 17:03:28.051039 4821 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused Sep 30 17:03:28 crc kubenswrapper[4821]: E0930 17:03:28.051220 4821 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError" Sep 30 17:03:28 crc kubenswrapper[4821]: W0930 17:03:28.067286 4821 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused Sep 30 17:03:28 crc kubenswrapper[4821]: E0930 17:03:28.067354 4821 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError" Sep 30 17:03:28 crc kubenswrapper[4821]: W0930 17:03:28.444149 4821 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused Sep 30 17:03:28 crc kubenswrapper[4821]: E0930 17:03:28.444313 4821 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError" Sep 30 17:03:28 crc kubenswrapper[4821]: W0930 17:03:28.485311 4821 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused Sep 30 17:03:28 crc kubenswrapper[4821]: E0930 17:03:28.485400 4821 reflector.go:158] 
"Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError" Sep 30 17:03:28 crc kubenswrapper[4821]: I0930 17:03:28.761553 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Sep 30 17:03:28 crc kubenswrapper[4821]: I0930 17:03:28.763805 4821 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8d5814b112ff0fc84fbb623f9ab16d9e3c2b87fdf044b6766712866d542c3af0" exitCode=255 Sep 30 17:03:28 crc kubenswrapper[4821]: I0930 17:03:28.763879 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8d5814b112ff0fc84fbb623f9ab16d9e3c2b87fdf044b6766712866d542c3af0"} Sep 30 17:03:28 crc kubenswrapper[4821]: I0930 17:03:28.764031 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:28 crc kubenswrapper[4821]: I0930 17:03:28.770152 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:28 crc kubenswrapper[4821]: I0930 17:03:28.770219 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:28 crc kubenswrapper[4821]: I0930 17:03:28.770234 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:28 crc kubenswrapper[4821]: I0930 17:03:28.770750 4821 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="38cab9c949dd7fb355698c743d8dfc4c1d3afe5bdeb3ff4816b5f1a5576f6b2e" exitCode=0 Sep 30 17:03:28 crc kubenswrapper[4821]: I0930 17:03:28.770885 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:28 crc kubenswrapper[4821]: I0930 17:03:28.771140 4821 scope.go:117] "RemoveContainer" containerID="8d5814b112ff0fc84fbb623f9ab16d9e3c2b87fdf044b6766712866d542c3af0" Sep 30 17:03:28 crc kubenswrapper[4821]: I0930 17:03:28.771561 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"38cab9c949dd7fb355698c743d8dfc4c1d3afe5bdeb3ff4816b5f1a5576f6b2e"} Sep 30 17:03:28 crc kubenswrapper[4821]: I0930 17:03:28.771605 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:28 crc kubenswrapper[4821]: I0930 17:03:28.771707 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:28 crc kubenswrapper[4821]: I0930 17:03:28.771609 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 30 17:03:28 crc kubenswrapper[4821]: I0930 17:03:28.772537 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:28 crc kubenswrapper[4821]: I0930 17:03:28.772557 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Sep 30 17:03:28 crc kubenswrapper[4821]: I0930 17:03:28.772569 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:28 crc kubenswrapper[4821]: I0930 17:03:28.772577 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:28 crc kubenswrapper[4821]: I0930 17:03:28.772587 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:28 crc kubenswrapper[4821]: I0930 17:03:28.772589 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:28 crc kubenswrapper[4821]: I0930 17:03:28.773322 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:28 crc kubenswrapper[4821]: I0930 17:03:28.773357 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:28 crc kubenswrapper[4821]: I0930 17:03:28.773370 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:29 crc kubenswrapper[4821]: I0930 17:03:29.776008 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Sep 30 17:03:29 crc kubenswrapper[4821]: I0930 17:03:29.777843 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4"} Sep 30 17:03:29 crc kubenswrapper[4821]: I0930 17:03:29.777974 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:29 crc kubenswrapper[4821]: I0930 17:03:29.778287 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:03:29 crc kubenswrapper[4821]: I0930 17:03:29.778741 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:29 crc kubenswrapper[4821]: I0930 17:03:29.778770 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:29 crc kubenswrapper[4821]: I0930 17:03:29.778783 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:29 crc kubenswrapper[4821]: I0930 17:03:29.781908 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:29 crc kubenswrapper[4821]: I0930 17:03:29.782303 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d09cf4ceeaaf2455d0e65cca9a4a2cd60ec786b9c68fafa389c4fd36b14c00a0"} Sep 30 17:03:29 crc kubenswrapper[4821]: I0930 17:03:29.782337 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e975958d18d7290b468f0d93af1ce6c48ec1d246435ea169eeb2c499c9870c21"} Sep 30 17:03:29 crc kubenswrapper[4821]: I0930 17:03:29.782353 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"eef5ad4aa662a28af0f9e205e11c4e1bcfaed3d0f64db2da227c8691544266dc"} Sep 30 17:03:29 crc kubenswrapper[4821]: I0930 17:03:29.782365 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"01479b58bfefcc6ef047c1962b0dc9fe42e09283c4eadc852fe5cc24016fedaf"} Sep 30 17:03:29 crc kubenswrapper[4821]: I0930 17:03:29.782652 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:29 crc kubenswrapper[4821]: I0930 17:03:29.782676 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:29 crc kubenswrapper[4821]: I0930 17:03:29.782687 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:30 crc kubenswrapper[4821]: I0930 17:03:30.606816 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 17:03:30 crc kubenswrapper[4821]: I0930 17:03:30.607218 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:30 crc kubenswrapper[4821]: I0930 17:03:30.609002 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:30 crc kubenswrapper[4821]: I0930 17:03:30.609067 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:30 crc kubenswrapper[4821]: I0930 17:03:30.609109 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:30 crc kubenswrapper[4821]: I0930 17:03:30.791589 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:30 crc kubenswrapper[4821]: I0930 17:03:30.791748 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:30 crc kubenswrapper[4821]: I0930 17:03:30.792131 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f48d55d4abbe6b78332ff6a0e852715a8425fbb25299bc5593969ff7b2156514"} Sep 30 17:03:30 crc kubenswrapper[4821]: I0930 17:03:30.792191 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:03:30 crc kubenswrapper[4821]: I0930 17:03:30.792584 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:30 crc kubenswrapper[4821]: I0930 17:03:30.792610 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:30 crc kubenswrapper[4821]: I0930 17:03:30.792621 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:30 crc kubenswrapper[4821]: I0930 17:03:30.793475 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:30 crc kubenswrapper[4821]: I0930 17:03:30.793541 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 
17:03:30 crc kubenswrapper[4821]: I0930 17:03:30.793566 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:30 crc kubenswrapper[4821]: I0930 17:03:30.865647 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Sep 30 17:03:31 crc kubenswrapper[4821]: I0930 17:03:31.074651 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:31 crc kubenswrapper[4821]: I0930 17:03:31.076934 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:31 crc kubenswrapper[4821]: I0930 17:03:31.077026 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:31 crc kubenswrapper[4821]: I0930 17:03:31.077044 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:31 crc kubenswrapper[4821]: I0930 17:03:31.077132 4821 kubelet_node_status.go:76] "Attempting to register node" node="crc" Sep 30 17:03:31 crc kubenswrapper[4821]: I0930 17:03:31.794170 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:31 crc kubenswrapper[4821]: I0930 17:03:31.794358 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:31 crc kubenswrapper[4821]: I0930 17:03:31.795582 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:31 crc kubenswrapper[4821]: I0930 17:03:31.795644 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:31 crc kubenswrapper[4821]: I0930 17:03:31.795669 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:31 crc kubenswrapper[4821]: I0930 17:03:31.796152 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:31 crc kubenswrapper[4821]: I0930 17:03:31.796222 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:31 crc kubenswrapper[4821]: I0930 17:03:31.796241 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:32 crc kubenswrapper[4821]: I0930 17:03:32.415453 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:03:32 crc kubenswrapper[4821]: I0930 17:03:32.561422 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 17:03:32 crc kubenswrapper[4821]: I0930 17:03:32.561667 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:32 crc kubenswrapper[4821]: I0930 17:03:32.562704 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:32 crc kubenswrapper[4821]: I0930 17:03:32.562728 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:32 crc kubenswrapper[4821]: I0930 17:03:32.562736 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Sep 30 17:03:32 crc kubenswrapper[4821]: I0930 17:03:32.796473 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:32 crc kubenswrapper[4821]: I0930 17:03:32.796472 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:32 crc kubenswrapper[4821]: I0930 17:03:32.797879 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:32 crc kubenswrapper[4821]: I0930 17:03:32.797910 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:32 crc kubenswrapper[4821]: I0930 17:03:32.797919 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:32 crc kubenswrapper[4821]: I0930 17:03:32.797979 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:32 crc kubenswrapper[4821]: I0930 17:03:32.798019 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:32 crc kubenswrapper[4821]: I0930 17:03:32.798037 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:32 crc kubenswrapper[4821]: I0930 17:03:32.875900 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:03:32 crc kubenswrapper[4821]: I0930 17:03:32.905731 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 17:03:32 crc kubenswrapper[4821]: I0930 17:03:32.906385 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:32 crc kubenswrapper[4821]: I0930 17:03:32.908173 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:32 crc kubenswrapper[4821]: I0930 17:03:32.908234 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:32 crc kubenswrapper[4821]: I0930 17:03:32.908251 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:32 crc kubenswrapper[4821]: I0930 17:03:32.912562 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 17:03:33 crc kubenswrapper[4821]: I0930 17:03:33.799555 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:33 crc kubenswrapper[4821]: I0930 17:03:33.799622 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 17:03:33 crc kubenswrapper[4821]: I0930 17:03:33.799560 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:33 crc kubenswrapper[4821]: I0930 17:03:33.800762 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:33 crc kubenswrapper[4821]: I0930 17:03:33.800795 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 
17:03:33 crc kubenswrapper[4821]: I0930 17:03:33.800808 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:33 crc kubenswrapper[4821]: I0930 17:03:33.801583 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:33 crc kubenswrapper[4821]: I0930 17:03:33.801668 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:33 crc kubenswrapper[4821]: I0930 17:03:33.801689 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:34 crc kubenswrapper[4821]: E0930 17:03:34.768939 4821 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Sep 30 17:03:34 crc kubenswrapper[4821]: I0930 17:03:34.801870 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:34 crc kubenswrapper[4821]: I0930 17:03:34.802819 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:34 crc kubenswrapper[4821]: I0930 17:03:34.802862 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:34 crc kubenswrapper[4821]: I0930 17:03:34.802874 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:35 crc kubenswrapper[4821]: I0930 17:03:35.561446 4821 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Sep 30 17:03:35 crc kubenswrapper[4821]: I0930 17:03:35.561547 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Sep 30 17:03:35 crc kubenswrapper[4821]: I0930 17:03:35.901425 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 17:03:35 crc kubenswrapper[4821]: I0930 17:03:35.901544 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:35 crc kubenswrapper[4821]: I0930 17:03:35.903004 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:35 crc kubenswrapper[4821]: I0930 17:03:35.903136 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:35 crc kubenswrapper[4821]: I0930 17:03:35.903168 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:38 crc kubenswrapper[4821]: I0930 17:03:38.007245 4821 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 
192.168.126.11:17697: connect: connection refused" start-of-body= Sep 30 17:03:38 crc kubenswrapper[4821]: I0930 17:03:38.007758 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Sep 30 17:03:38 crc kubenswrapper[4821]: I0930 17:03:38.643663 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Sep 30 17:03:39 crc kubenswrapper[4821]: I0930 17:03:39.171924 4821 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Sep 30 17:03:39 crc kubenswrapper[4821]: I0930 17:03:39.172039 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Sep 30 17:03:39 crc kubenswrapper[4821]: I0930 17:03:39.181762 4821 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Sep 30 17:03:39 crc kubenswrapper[4821]: I0930 17:03:39.181860 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Sep 30 17:03:39 crc kubenswrapper[4821]: I0930 17:03:39.367970 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Sep 30 17:03:39 crc kubenswrapper[4821]: I0930 17:03:39.368195 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:39 crc kubenswrapper[4821]: I0930 17:03:39.369440 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:39 crc kubenswrapper[4821]: I0930 17:03:39.369492 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:39 crc kubenswrapper[4821]: I0930 17:03:39.369527 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:39 crc kubenswrapper[4821]: I0930 17:03:39.426999 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Sep 30 17:03:39 crc kubenswrapper[4821]: I0930 17:03:39.815670 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:39 crc kubenswrapper[4821]: I0930 17:03:39.816900 4821 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:39 crc kubenswrapper[4821]: I0930 17:03:39.816954 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:39 crc kubenswrapper[4821]: I0930 17:03:39.816974 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:39 crc kubenswrapper[4821]: I0930 17:03:39.832221 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Sep 30 17:03:40 crc kubenswrapper[4821]: I0930 17:03:40.818805 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:40 crc kubenswrapper[4821]: I0930 17:03:40.820520 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:40 crc kubenswrapper[4821]: I0930 17:03:40.820561 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:40 crc kubenswrapper[4821]: I0930 17:03:40.820575 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:42 crc kubenswrapper[4821]: I0930 17:03:42.886077 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:03:42 crc kubenswrapper[4821]: I0930 17:03:42.887223 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:42 crc kubenswrapper[4821]: I0930 17:03:42.887754 4821 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Sep 30 17:03:42 crc kubenswrapper[4821]: I0930 17:03:42.887818 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Sep 30 17:03:42 crc kubenswrapper[4821]: I0930 17:03:42.890127 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:42 crc kubenswrapper[4821]: I0930 17:03:42.890210 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:42 crc kubenswrapper[4821]: I0930 17:03:42.890235 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:42 crc kubenswrapper[4821]: I0930 17:03:42.894407 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:03:43 crc kubenswrapper[4821]: I0930 17:03:43.830527 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:43 crc kubenswrapper[4821]: I0930 17:03:43.831156 4821 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get 
\"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Sep 30 17:03:43 crc kubenswrapper[4821]: I0930 17:03:43.831248 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Sep 30 17:03:43 crc kubenswrapper[4821]: I0930 17:03:43.832175 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:43 crc kubenswrapper[4821]: I0930 17:03:43.832323 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:43 crc kubenswrapper[4821]: I0930 17:03:43.832509 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:44 crc kubenswrapper[4821]: E0930 17:03:44.167392 4821 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.169799 4821 trace.go:236] Trace[1107138754]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Sep-2025 17:03:32.974) (total time: 11194ms): Sep 30 17:03:44 crc kubenswrapper[4821]: Trace[1107138754]: ---"Objects listed" error: 11194ms (17:03:44.169) Sep 30 17:03:44 crc kubenswrapper[4821]: Trace[1107138754]: [11.194912023s] [11.194912023s] END Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.169822 4821 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.172732 4821 trace.go:236] Trace[1108357024]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Sep-2025 17:03:33.896) (total time: 10276ms): Sep 30 17:03:44 crc kubenswrapper[4821]: Trace[1108357024]: ---"Objects listed" error: 10276ms (17:03:44.172) Sep 30 17:03:44 crc kubenswrapper[4821]: Trace[1108357024]: [10.276468069s] [10.276468069s] END Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.172772 4821 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Sep 30 17:03:44 crc kubenswrapper[4821]: E0930 17:03:44.173359 4821 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.175138 4821 trace.go:236] Trace[958756915]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Sep-2025 17:03:32.730) (total time: 11444ms): Sep 30 17:03:44 crc kubenswrapper[4821]: Trace[958756915]: ---"Objects listed" error: 11444ms (17:03:44.174) Sep 30 17:03:44 crc kubenswrapper[4821]: Trace[958756915]: [11.444313046s] [11.444313046s] END Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.175188 4821 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.176469 4821 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Sep 30 17:03:44 crc 
kubenswrapper[4821]: I0930 17:03:44.176744 4821 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.241043 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.246906 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.637652 4821 apiserver.go:52] "Watching apiserver" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.640428 4821 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.640805 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc"] Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.641334 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.641362 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.641414 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.641509 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.641531 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.641580 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 30 17:03:44 crc kubenswrapper[4821]: E0930 17:03:44.641639 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:03:44 crc kubenswrapper[4821]: E0930 17:03:44.641895 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:03:44 crc kubenswrapper[4821]: E0930 17:03:44.641962 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.644409 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.644977 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.645109 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.645206 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.645367 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.645474 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.645373 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.646154 4821 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.646441 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.648476 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.677457 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.679224 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.679422 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.679588 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.679686 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.679820 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.679924 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.680015 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.680146 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.680248 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.680350 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.680483 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.680570 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.680661 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.680968 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.681095 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.681202 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.680288 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.680510 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.680795 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.680879 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.681005 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.681305 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.681571 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.681307 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.681623 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.681656 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.681682 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.681705 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.681732 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.681757 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.681782 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.681808 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.681832 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.681857 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.681880 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.681901 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.681927 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.681952 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.681977 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.682011 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.682034 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.682058 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.682099 4821 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.682108 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.682145 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.682287 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.682589 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.682659 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.682730 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.682741 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.682787 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.682818 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.682845 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.682869 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.682892 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.682913 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.682934 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.682956 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.682978 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.683001 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.683024 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.683046 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.683150 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.683175 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.683196 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.683217 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.683241 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.683267 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.683326 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: 
\"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.683354 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.683380 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.683514 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.683543 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.683575 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.683602 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.683601 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.683630 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.683660 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.683690 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.683718 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.683745 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.683771 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.683792 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.683813 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.683835 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.683859 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod 
\"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.683883 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.683905 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.683933 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.683934 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.683955 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.684055 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.684198 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.684800 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.684969 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.685050 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.685315 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.685439 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.685434 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.684122 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.685849 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.685893 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.685928 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.685977 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.686007 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.686032 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.686058 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.686418 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.686601 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.686731 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.687034 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.687066 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.687235 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.687252 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.687070 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.687596 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.687632 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.687653 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.687671 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.687690 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.689161 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.689208 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.689266 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.689321 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod 
\"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.689350 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.689404 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.689435 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.689485 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.689520 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.689574 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.689601 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.689658 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.689692 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.689746 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.689807 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.689845 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.690740 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.692159 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.692959 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.693026 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.693249 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.693457 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.693532 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.693781 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.694096 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.694360 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.694369 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.694473 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.694568 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.694816 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.695002 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.696059 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.696442 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.696473 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.696694 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.696565 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.696805 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.696925 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.697093 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.697266 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.697332 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.697428 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.697536 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.697583 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.697722 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.697855 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.698011 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.697952 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.698167 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.698201 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.698419 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.698445 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.698510 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.698656 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.698811 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.698922 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.698923 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.698999 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.699118 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.699330 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.699696 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.699891 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.700074 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.700215 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.701011 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.701135 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.701278 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.701407 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.701486 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.701677 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.701751 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.701766 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.701967 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.702132 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.702329 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.702465 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.702601 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.702734 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.702902 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.702942 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.703032 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.703140 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.703189 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.703309 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.703354 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.703385 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.703502 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.703938 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.704442 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.704674 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.704848 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.706880 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.709403 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.717723 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.717871 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.717844 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.718030 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.718260 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.718390 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.718490 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.718635 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.718656 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.718711 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.718773 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.718977 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.719113 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.719329 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.719332 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.719630 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.719723 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.719818 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.720047 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.720109 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.720258 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.720447 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.720621 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.720865 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.721128 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.721356 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.721554 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.721784 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.722012 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.722342 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.722558 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.722860 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.723306 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.723658 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.723891 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.724348 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.724744 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.725021 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.725204 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.725510 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). 
InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.725841 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.726060 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.726703 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.726833 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.726947 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.727709 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.728615 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.729125 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.729388 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.729582 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.729915 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.729928 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.730048 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.730226 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.730776 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.731017 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.731542 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.730918 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.731900 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.732177 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.732199 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.709925 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.732631 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.733526 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.733583 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.733781 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.734002 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.734588 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.734810 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.734968 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.735021 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.735444 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.735447 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.735725 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.735934 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.735945 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.736293 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: E0930 17:03:44.736382 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:03:45.236359331 +0000 UTC m=+21.141405275 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.736740 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.736842 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.737189 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.703527 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.737408 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.737505 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.737668 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.739840 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.739966 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.740127 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.740243 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.740659 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.740764 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.740897 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.741022 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.741191 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.741285 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.741375 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.741483 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.741628 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.741739 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.741834 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.741935 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.742028 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.742158 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.742949 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.743107 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.743236 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.743351 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.743463 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.743587 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.743685 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.743782 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.743904 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.744036 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.744213 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.744342 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.744455 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.745209 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.749433 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.749527 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.749564 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.749601 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.749642 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.749710 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.749746 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.750442 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.737769 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.738004 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.738733 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.739300 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.739556 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.740525 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.740544 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.740652 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.740739 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.742794 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). 
InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.743327 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.743627 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.744125 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.744720 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.753224 4821 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.754055 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Sep 30 17:03:44 crc kubenswrapper[4821]: E0930 17:03:44.754285 4821 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.754319 4821 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.756703 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.756871 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.757112 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.757230 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.757491 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: E0930 17:03:44.759290 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 17:03:45.259215624 +0000 UTC m=+21.164261568 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.760233 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.760550 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.760589 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.761149 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.761461 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.761986 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762287 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762313 4821 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762330 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762347 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762360 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762374 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762388 4821 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762403 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762417 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762431 4821 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762447 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762461 4821 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762473 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: 
\"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762487 4821 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762498 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762510 4821 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762523 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762535 4821 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762548 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762560 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762572 4821 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762585 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762597 4821 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762610 4821 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762623 4821 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762636 4821 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762649 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762661 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762677 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762691 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762704 4821 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762720 4821 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762733 4821 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762746 4821 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762759 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762772 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762784 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762798 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762810 4821 
reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762822 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762834 4821 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762845 4821 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762859 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762872 4821 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762883 4821 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762895 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762910 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762921 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762932 4821 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762945 4821 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762957 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762975 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762987 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.762997 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763008 4821 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763019 4821 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763030 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763041 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763055 4821 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763066 4821 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763078 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763117 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763129 4821 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763142 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763155 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763169 4821 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763180 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763192 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763203 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763213 4821 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763224 4821 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763237 4821 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763273 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763285 4821 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763296 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763306 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763318 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763329 4821 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763340 4821 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763351 4821 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763368 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763381 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763393 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763405 4821 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763416 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763427 4821 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763440 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763453 4821 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763466 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763478 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763488 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763499 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763510 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763521 4821 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763532 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763543 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763555 4821 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763565 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763576 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763590 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763602 4821 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763614 4821 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763625 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
\"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763646 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763658 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763668 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763679 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763689 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763700 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763711 4821 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763721 4821 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763734 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763745 4821 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763756 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763767 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763778 4821 reconciler_common.go:293] "Volume detached for 
volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763820 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763835 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763848 4821 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763860 4821 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763874 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763886 4821 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.763900 4821 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.764859 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.765098 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.765277 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.765466 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.765657 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.765859 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.766275 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.766791 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.768326 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.771006 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.772431 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 30 17:03:44 crc kubenswrapper[4821]: E0930 17:03:44.772542 4821 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 17:03:44 crc kubenswrapper[4821]: E0930 17:03:44.772642 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 17:03:45.272619985 +0000 UTC m=+21.177666139 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.773601 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.774464 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.775029 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.775259 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.777229 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). 
InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.777297 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.777352 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.777618 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.777941 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.779388 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.781444 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.781985 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.782266 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.783817 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.784396 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.784775 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.784853 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.787971 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.788581 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.788636 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: E0930 17:03:44.788768 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 17:03:44 crc kubenswrapper[4821]: E0930 17:03:44.788791 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 17:03:44 crc kubenswrapper[4821]: E0930 17:03:44.788806 4821 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:03:44 crc kubenswrapper[4821]: E0930 17:03:44.788867 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-09-30 17:03:45.28884808 +0000 UTC m=+21.193894024 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.789639 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.790456 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.790633 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: E0930 17:03:44.791465 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 17:03:44 crc kubenswrapper[4821]: E0930 17:03:44.791501 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 17:03:44 crc kubenswrapper[4821]: E0930 17:03:44.791521 4821 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:03:44 crc kubenswrapper[4821]: E0930 17:03:44.791600 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-09-30 17:03:45.291574972 +0000 UTC m=+21.196621106 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.792057 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.793032 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.801102 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.804826 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.805127 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.805209 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.805550 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.809258 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.813559 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.814165 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.819008 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.820264 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.821221 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.821521 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.821843 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.822949 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.824291 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.821880 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes"
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.825470 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes"
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.828404 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.829112 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes"
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.831547 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes"
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.835283 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.835293 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.840905 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.841018 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.841034 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes"
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.841546 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.841633 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.843988 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.844512 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.846122 4821 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4" exitCode=255
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.853282 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes"
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.854194 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Sep 30 17:03:44 crc kubenswrapper[4821]: E0930 17:03:44.854548 4821 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.854613 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes"
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.854682 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.855490 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes"
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.857681 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.858146 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes"
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.859679 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes"
Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.868002 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.868437 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.868526 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Sep 30 
17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.868576 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.868593 4821 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.868607 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.868621 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.868636 4821 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.868621 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.868696 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.868760 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.868650 4821 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.868792 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.868806 4821 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.868818 4821 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.868830 
4821 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.868841 4821 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.868855 4821 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.868868 4821 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.868880 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.868890 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.868901 4821 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.868912 4821 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.868924 4821 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.868935 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.868945 4821 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.868955 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.868966 4821 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 
17:03:44.868976 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.868987 4821 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.868998 4821 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869010 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869022 4821 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869033 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869044 4821 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869054 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869064 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869076 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869120 4821 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869131 4821 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869141 4821 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869151 4821 reconciler_common.go:293] "Volume 
detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869161 4821 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869173 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869185 4821 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869196 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869207 4821 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869217 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869227 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869242 4821 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869253 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869268 4821 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869279 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869290 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869301 4821 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869313 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869323 4821 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869334 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869344 4821 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869354 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869365 4821 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869376 4821 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869387 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869399 4821 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869410 4821 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869421 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869433 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869445 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869456 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869469 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869480 4821 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869532 4821 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869549 4821 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869561 4821 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.869709 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.870332 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.872392 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.872962 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.874049 4821 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.874194 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.876427 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Sep 30 17:03:44 crc 
kubenswrapper[4821]: I0930 17:03:44.877524 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.878236 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.880431 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.881706 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.882790 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.882859 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.883513 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.886930 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.889377 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.890388 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.891656 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.892354 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.896672 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.897422 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.897392 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.898727 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.899676 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.900900 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.901410 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.902393 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.903057 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.903744 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.904738 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.905291 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4"} Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.905385 4821 scope.go:117] "RemoveContainer" containerID="8d5814b112ff0fc84fbb623f9ab16d9e3c2b87fdf044b6766712866d542c3af0" Sep 30 17:03:44 crc 
kubenswrapper[4821]: I0930 17:03:44.909844 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.920151 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.920940 4821 scope.go:117] "RemoveContainer" containerID="eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4" Sep 30 17:03:44 crc kubenswrapper[4821]: E0930 17:03:44.921186 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.923154 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.939257 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.951965 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.955118 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.962725 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.967859 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.976710 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.983005 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:03:44 crc kubenswrapper[4821]: W0930 17:03:44.986146 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-14118284b0c8008c3377c65e0cffb9efcbfa9d7580a936c9375ba88e595cbe5a WatchSource:0}: Error finding container 14118284b0c8008c3377c65e0cffb9efcbfa9d7580a936c9375ba88e595cbe5a: Status 404 returned error can't find the container with id 14118284b0c8008c3377c65e0cffb9efcbfa9d7580a936c9375ba88e595cbe5a Sep 30 17:03:44 crc kubenswrapper[4821]: I0930 17:03:44.993699 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:03:44 crc kubenswrapper[4821]: W0930 17:03:44.997462 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-96b3718ad25ca403e3ffcac619b8b88d620d2c5fe5539098b090b71156172077 WatchSource:0}: Error finding container 96b3718ad25ca403e3ffcac619b8b88d620d2c5fe5539098b090b71156172077: Status 404 returned error can't find the container with id 96b3718ad25ca403e3ffcac619b8b88d620d2c5fe5539098b090b71156172077 Sep 30 17:03:45 crc kubenswrapper[4821]: I0930 17:03:45.007326 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5814b112ff0fc84fbb623f9ab16d9e3c2b87fdf044b6766712866d542c3af0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:28Z\\\",\\\"message\\\":\\\"W0930 17:03:27.880325 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0930 
17:03:27.880911 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759251807 cert, and key in /tmp/serving-cert-1009007902/serving-signer.crt, /tmp/serving-cert-1009007902/serving-signer.key\\\\nI0930 17:03:28.290711 1 observer_polling.go:159] Starting file observer\\\\nW0930 17:03:28.294680 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0930 17:03:28.294892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:28.296922 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1009007902/tls.crt::/tmp/serving-cert-1009007902/tls.key\\\\\\\"\\\\nF0930 17:03:28.538899 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:03:45 crc kubenswrapper[4821]: I0930 17:03:45.020772 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:03:45 crc kubenswrapper[4821]: I0930 17:03:45.032005 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:03:45 crc kubenswrapper[4821]: I0930 17:03:45.043407 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Sep 30 17:03:45 crc kubenswrapper[4821]: I0930 17:03:45.054569 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Sep 30 17:03:45 crc kubenswrapper[4821]: I0930 17:03:45.069235 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Sep 30 17:03:45 crc kubenswrapper[4821]: I0930 17:03:45.089175 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:03:45 crc kubenswrapper[4821]: I0930 17:03:45.098811 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:03:45 crc kubenswrapper[4821]: I0930 17:03:45.119115 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:03:45 crc kubenswrapper[4821]: I0930 17:03:45.129359 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:03:45 crc kubenswrapper[4821]: I0930 17:03:45.141750 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5814b112ff0fc84fbb623f9ab16d9e3c2b87fdf044b6766712866d542c3af0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:28Z\\\",\\\"message\\\":\\\"W0930 17:03:27.880325 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0930 
17:03:27.880911 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759251807 cert, and key in /tmp/serving-cert-1009007902/serving-signer.crt, /tmp/serving-cert-1009007902/serving-signer.key\\\\nI0930 17:03:28.290711 1 observer_polling.go:159] Starting file observer\\\\nW0930 17:03:28.294680 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0930 17:03:28.294892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:28.296922 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1009007902/tls.crt::/tmp/serving-cert-1009007902/tls.key\\\\\\\"\\\\nF0930 17:03:28.538899 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:03:45 crc kubenswrapper[4821]: I0930 17:03:45.159711 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:03:45 crc kubenswrapper[4821]: I0930 17:03:45.171512 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:03:45 crc kubenswrapper[4821]: I0930 17:03:45.273583 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:03:45 crc kubenswrapper[4821]: E0930 17:03:45.273781 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:03:46.273753162 +0000 UTC m=+22.178799106 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:03:45 crc kubenswrapper[4821]: I0930 17:03:45.273834 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:03:45 crc kubenswrapper[4821]: I0930 17:03:45.273874 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:03:45 crc kubenswrapper[4821]: E0930 17:03:45.274024 4821 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 17:03:45 crc kubenswrapper[4821]: E0930 17:03:45.274118 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-09-30 17:03:46.27409463 +0000 UTC m=+22.179140574 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 17:03:45 crc kubenswrapper[4821]: E0930 17:03:45.274213 4821 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 17:03:45 crc kubenswrapper[4821]: E0930 17:03:45.274253 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 17:03:46.274243253 +0000 UTC m=+22.179289197 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 17:03:45 crc kubenswrapper[4821]: I0930 17:03:45.375138 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:03:45 crc kubenswrapper[4821]: I0930 17:03:45.375199 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:03:45 crc kubenswrapper[4821]: E0930 17:03:45.375338 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 17:03:45 crc kubenswrapper[4821]: E0930 17:03:45.375356 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 17:03:45 crc kubenswrapper[4821]: E0930 17:03:45.375357 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 17:03:45 crc kubenswrapper[4821]: E0930 17:03:45.375386 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 17:03:45 crc kubenswrapper[4821]: E0930 17:03:45.375401 4821 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:03:45 crc kubenswrapper[4821]: E0930 17:03:45.375482 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-09-30 17:03:46.375454259 +0000 UTC m=+22.280500223 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:03:45 crc kubenswrapper[4821]: E0930 17:03:45.375369 4821 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:03:45 crc kubenswrapper[4821]: E0930 17:03:45.376009 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-09-30 17:03:46.37599188 +0000 UTC m=+22.281037854 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:03:45 crc kubenswrapper[4821]: I0930 17:03:45.706657 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:03:45 crc kubenswrapper[4821]: I0930 17:03:45.706744 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:03:45 crc kubenswrapper[4821]: E0930 17:03:45.706830 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:03:45 crc kubenswrapper[4821]: E0930 17:03:45.706928 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:03:45 crc kubenswrapper[4821]: I0930 17:03:45.849898 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"14118284b0c8008c3377c65e0cffb9efcbfa9d7580a936c9375ba88e595cbe5a"} Sep 30 17:03:45 crc kubenswrapper[4821]: I0930 17:03:45.851965 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee"} Sep 30 17:03:45 crc kubenswrapper[4821]: I0930 17:03:45.852034 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"bfa4f69861553626f9b9c449835fc49c52a20853085b990d873144b361dce102"} Sep 30 17:03:45 crc kubenswrapper[4821]: I0930 17:03:45.854149 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Sep 30 17:03:45 crc kubenswrapper[4821]: I0930 17:03:45.856781 4821 scope.go:117] "RemoveContainer" containerID="eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4" Sep 30 17:03:45 crc kubenswrapper[4821]: E0930 17:03:45.857115 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Sep 30 17:03:45 crc kubenswrapper[4821]: I0930 17:03:45.857775 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840"} Sep 30 17:03:45 crc kubenswrapper[4821]: I0930 17:03:45.857848 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b"} Sep 30 17:03:45 crc kubenswrapper[4821]: I0930 17:03:45.857869 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"96b3718ad25ca403e3ffcac619b8b88d620d2c5fe5539098b090b71156172077"} Sep 30 17:03:45 crc kubenswrapper[4821]: I0930 17:03:45.873242 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:45Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:45 crc kubenswrapper[4821]: I0930 17:03:45.893881 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:45Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:45 crc kubenswrapper[4821]: I0930 17:03:45.926027 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:45Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:45 crc kubenswrapper[4821]: I0930 17:03:45.937944 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:45Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:45 crc kubenswrapper[4821]: I0930 17:03:45.953444 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:45Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:45 crc kubenswrapper[4821]: I0930 17:03:45.969905 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5814b112ff0fc84fbb623f9ab16d9e3c2b87fdf044b6766712866d542c3af0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:28Z\\\",\\\"message\\\":\\\"W0930 17:03:27.880325 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0930 
17:03:27.880911 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759251807 cert, and key in /tmp/serving-cert-1009007902/serving-signer.crt, /tmp/serving-cert-1009007902/serving-signer.key\\\\nI0930 17:03:28.290711 1 observer_polling.go:159] Starting file observer\\\\nW0930 17:03:28.294680 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0930 17:03:28.294892 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:28.296922 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1009007902/tls.crt::/tmp/serving-cert-1009007902/tls.key\\\\\\\"\\\\nF0930 17:03:28.538899 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:45Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:45 crc kubenswrapper[4821]: I0930 17:03:45.985370 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:45Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.000191 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:45Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.023466 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:46Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.053440 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:46Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.068692 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:46Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.092114 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:46Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.112943 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:46Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.143281 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:46Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.170675 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:46Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.203924 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-hbtzr"] Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.204337 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-hbtzr" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.205235 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-q2xpd"] Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.205548 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.207088 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.207742 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.209118 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.209164 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.210796 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.210892 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.211183 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.212043 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.227610 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:46Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.259857 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:46Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.283833 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:03:46 crc kubenswrapper[4821]: E0930 17:03:46.284008 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:03:48.283984205 +0000 UTC m=+24.189030149 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.284170 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:03:46 crc kubenswrapper[4821]: E0930 17:03:46.284383 4821 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.284951 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md5fg\" (UniqueName: \"kubernetes.io/projected/1c2ce348-eadc-4629-a03f-fb8924b5b434-kube-api-access-md5fg\") pod \"machine-config-daemon-q2xpd\" (UID: \"1c2ce348-eadc-4629-a03f-fb8924b5b434\") " pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" Sep 30 17:03:46 crc kubenswrapper[4821]: E0930 17:03:46.285107 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 17:03:48.285029049 +0000 UTC m=+24.190074993 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.285190 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.285300 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/80792f77-19d5-48f4-b3ea-5d53f770cb33-hosts-file\") pod \"node-resolver-hbtzr\" (UID: \"80792f77-19d5-48f4-b3ea-5d53f770cb33\") " pod="openshift-dns/node-resolver-hbtzr" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.285334 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5zq9\" (UniqueName: \"kubernetes.io/projected/80792f77-19d5-48f4-b3ea-5d53f770cb33-kube-api-access-k5zq9\") pod \"node-resolver-hbtzr\" (UID: \"80792f77-19d5-48f4-b3ea-5d53f770cb33\") " pod="openshift-dns/node-resolver-hbtzr" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.285356 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1c2ce348-eadc-4629-a03f-fb8924b5b434-rootfs\") pod \"machine-config-daemon-q2xpd\" (UID: \"1c2ce348-eadc-4629-a03f-fb8924b5b434\") " pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.285373 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1c2ce348-eadc-4629-a03f-fb8924b5b434-proxy-tls\") pod \"machine-config-daemon-q2xpd\" (UID: \"1c2ce348-eadc-4629-a03f-fb8924b5b434\") " pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.285390 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1c2ce348-eadc-4629-a03f-fb8924b5b434-mcd-auth-proxy-config\") pod \"machine-config-daemon-q2xpd\" (UID: \"1c2ce348-eadc-4629-a03f-fb8924b5b434\") " pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" Sep 30 17:03:46 crc kubenswrapper[4821]: E0930 17:03:46.285397 4821 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 17:03:46 crc kubenswrapper[4821]: E0930 17:03:46.285591 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 17:03:48.28556461 +0000 UTC m=+24.190610754 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.287326 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:46Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.307413 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:46Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.327461 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:46Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.355613 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:46Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.383193 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:46Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.386538 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/80792f77-19d5-48f4-b3ea-5d53f770cb33-hosts-file\") pod \"node-resolver-hbtzr\" (UID: \"80792f77-19d5-48f4-b3ea-5d53f770cb33\") " pod="openshift-dns/node-resolver-hbtzr" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.386586 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5zq9\" (UniqueName: \"kubernetes.io/projected/80792f77-19d5-48f4-b3ea-5d53f770cb33-kube-api-access-k5zq9\") pod \"node-resolver-hbtzr\" (UID: \"80792f77-19d5-48f4-b3ea-5d53f770cb33\") " pod="openshift-dns/node-resolver-hbtzr" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.386605 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1c2ce348-eadc-4629-a03f-fb8924b5b434-rootfs\") pod \"machine-config-daemon-q2xpd\" (UID: \"1c2ce348-eadc-4629-a03f-fb8924b5b434\") " pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.386626 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1c2ce348-eadc-4629-a03f-fb8924b5b434-proxy-tls\") pod \"machine-config-daemon-q2xpd\" (UID: \"1c2ce348-eadc-4629-a03f-fb8924b5b434\") " pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.386648 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1c2ce348-eadc-4629-a03f-fb8924b5b434-mcd-auth-proxy-config\") pod \"machine-config-daemon-q2xpd\" (UID: \"1c2ce348-eadc-4629-a03f-fb8924b5b434\") " pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.386678 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md5fg\" (UniqueName: \"kubernetes.io/projected/1c2ce348-eadc-4629-a03f-fb8924b5b434-kube-api-access-md5fg\") pod \"machine-config-daemon-q2xpd\" (UID: \"1c2ce348-eadc-4629-a03f-fb8924b5b434\") " pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.386701 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: 
\"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.386722 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.386806 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1c2ce348-eadc-4629-a03f-fb8924b5b434-rootfs\") pod \"machine-config-daemon-q2xpd\" (UID: \"1c2ce348-eadc-4629-a03f-fb8924b5b434\") " pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" Sep 30 17:03:46 crc kubenswrapper[4821]: E0930 17:03:46.386888 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 17:03:46 crc kubenswrapper[4821]: E0930 17:03:46.386910 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 17:03:46 crc kubenswrapper[4821]: E0930 17:03:46.386924 4821 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:03:46 crc kubenswrapper[4821]: E0930 17:03:46.386993 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-09-30 17:03:48.38697349 +0000 UTC m=+24.292019434 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:03:46 crc kubenswrapper[4821]: E0930 17:03:46.387030 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 17:03:46 crc kubenswrapper[4821]: E0930 17:03:46.387053 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 17:03:46 crc kubenswrapper[4821]: E0930 17:03:46.387065 4821 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:03:46 crc kubenswrapper[4821]: E0930 17:03:46.387135 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-09-30 17:03:48.387117174 +0000 UTC m=+24.292163118 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.387206 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/80792f77-19d5-48f4-b3ea-5d53f770cb33-hosts-file\") pod \"node-resolver-hbtzr\" (UID: \"80792f77-19d5-48f4-b3ea-5d53f770cb33\") " pod="openshift-dns/node-resolver-hbtzr" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.387558 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1c2ce348-eadc-4629-a03f-fb8924b5b434-mcd-auth-proxy-config\") pod \"machine-config-daemon-q2xpd\" (UID: \"1c2ce348-eadc-4629-a03f-fb8924b5b434\") " pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.391691 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1c2ce348-eadc-4629-a03f-fb8924b5b434-proxy-tls\") pod \"machine-config-daemon-q2xpd\" (UID: \"1c2ce348-eadc-4629-a03f-fb8924b5b434\") " pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.401829 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:46Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.414777 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md5fg\" (UniqueName: \"kubernetes.io/projected/1c2ce348-eadc-4629-a03f-fb8924b5b434-kube-api-access-md5fg\") pod \"machine-config-daemon-q2xpd\" (UID: \"1c2ce348-eadc-4629-a03f-fb8924b5b434\") " pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.419762 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5zq9\" (UniqueName: \"kubernetes.io/projected/80792f77-19d5-48f4-b3ea-5d53f770cb33-kube-api-access-k5zq9\") pod \"node-resolver-hbtzr\" (UID: \"80792f77-19d5-48f4-b3ea-5d53f770cb33\") " pod="openshift-dns/node-resolver-hbtzr" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.434319 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:46Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.454601 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:46Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.470242 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:46Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.517673 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-hbtzr" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.523607 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" Sep 30 17:03:46 crc kubenswrapper[4821]: W0930 17:03:46.534748 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80792f77_19d5_48f4_b3ea_5d53f770cb33.slice/crio-3b36f4b79ad2d417d6447869d7bbaa186861f3948aa65f54cc40ef849b023acd WatchSource:0}: Error finding container 3b36f4b79ad2d417d6447869d7bbaa186861f3948aa65f54cc40ef849b023acd: Status 404 returned error can't find the container with id 3b36f4b79ad2d417d6447869d7bbaa186861f3948aa65f54cc40ef849b023acd Sep 30 17:03:46 crc kubenswrapper[4821]: W0930 17:03:46.537933 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c2ce348_eadc_4629_a03f_fb8924b5b434.slice/crio-b2462caefdaec13cb51fc6bbfa7f98a9516e72a1708446973f2701e5a65ff2f2 WatchSource:0}: Error finding container b2462caefdaec13cb51fc6bbfa7f98a9516e72a1708446973f2701e5a65ff2f2: Status 404 returned error can't find the container with id b2462caefdaec13cb51fc6bbfa7f98a9516e72a1708446973f2701e5a65ff2f2 Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.586230 4821 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.709413 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:03:46 crc kubenswrapper[4821]: E0930 17:03:46.709786 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.713307 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.714155 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.721174 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.722304 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.722953 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.724269 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.862339 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" event={"ID":"1c2ce348-eadc-4629-a03f-fb8924b5b434","Type":"ContainerStarted","Data":"58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1"} Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.862391 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" event={"ID":"1c2ce348-eadc-4629-a03f-fb8924b5b434","Type":"ContainerStarted","Data":"6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096"} Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.862402 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" event={"ID":"1c2ce348-eadc-4629-a03f-fb8924b5b434","Type":"ContainerStarted","Data":"b2462caefdaec13cb51fc6bbfa7f98a9516e72a1708446973f2701e5a65ff2f2"} Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.864140 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hbtzr" event={"ID":"80792f77-19d5-48f4-b3ea-5d53f770cb33","Type":"ContainerStarted","Data":"a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2"} Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.864178 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns/node-resolver-hbtzr" event={"ID":"80792f77-19d5-48f4-b3ea-5d53f770cb33","Type":"ContainerStarted","Data":"3b36f4b79ad2d417d6447869d7bbaa186861f3948aa65f54cc40ef849b023acd"} Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.864796 4821 scope.go:117] "RemoveContainer" containerID="eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4" Sep 30 17:03:46 crc kubenswrapper[4821]: E0930 17:03:46.864913 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.881376 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:46Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.896858 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:46Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.912426 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:46Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.929596 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:46Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.944029 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:46Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.962918 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:46Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.976557 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:46Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:46 crc kubenswrapper[4821]: I0930 17:03:46.991725 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:46Z is 
after 2025-08-24T17:21:41Z" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.012720 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:47Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.029654 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:47Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.041630 4821 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:47Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.053609 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:47Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.071860 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:47Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.086631 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:47Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.103448 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:47Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.109624 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-jpnpn"] Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.110615 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.111482 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-k7m5w"] Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.112298 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-h9sjg"] Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.112732 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.112811 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.113473 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.113843 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.113898 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.114049 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.114232 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.116249 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.116810 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.117187 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.117479 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.118501 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.119111 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.120601 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.123305 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.126340 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.133394 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:47Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.151988 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:47Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.163606 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T17:03:47Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.177717 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:47Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.191130 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:47Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.194622 4821 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-systemd-units\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.194659 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-etc-openvswitch\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.194680 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.194703 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0-cnibin\") pod \"multus-additional-cni-plugins-jpnpn\" (UID: \"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\") " pod="openshift-multus/multus-additional-cni-plugins-jpnpn" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.194720 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbggh\" (UniqueName: \"kubernetes.io/projected/22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0-kube-api-access-kbggh\") pod \"multus-additional-cni-plugins-jpnpn\" (UID: \"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\") " pod="openshift-multus/multus-additional-cni-plugins-jpnpn" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.194784 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-ovnkube-script-lib\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.194802 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-multus-cni-dir\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.194820 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-host-run-netns\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.194836 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xwk2\" (UniqueName: \"kubernetes.io/projected/c84981f2-eb86-4d0d-9322-db1b62feeac8-kube-api-access-5xwk2\") pod \"multus-h9sjg\" (UID: 
\"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.194901 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0-os-release\") pod \"multus-additional-cni-plugins-jpnpn\" (UID: \"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\") " pod="openshift-multus/multus-additional-cni-plugins-jpnpn" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.194953 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jpnpn\" (UID: \"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\") " pod="openshift-multus/multus-additional-cni-plugins-jpnpn" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.195001 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-run-ovn\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.195037 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-multus-socket-dir-parent\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.195071 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-host-var-lib-cni-bin\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.195101 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c84981f2-eb86-4d0d-9322-db1b62feeac8-multus-daemon-config\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.195167 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-run-ovn-kubernetes\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.195192 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0-system-cni-dir\") pod \"multus-additional-cni-plugins-jpnpn\" (UID: \"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\") " pod="openshift-multus/multus-additional-cni-plugins-jpnpn" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.195209 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jpnpn\" (UID: \"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\") " pod="openshift-multus/multus-additional-cni-plugins-jpnpn" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.195238 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-cni-bin\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.195257 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-host-run-k8s-cni-cncf-io\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.195276 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-multus-conf-dir\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.195299 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-run-systemd\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.195316 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-node-log\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.195351 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-os-release\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.195369 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0-cni-binary-copy\") pod \"multus-additional-cni-plugins-jpnpn\" (UID: \"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\") " pod="openshift-multus/multus-additional-cni-plugins-jpnpn" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.195398 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-slash\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.195415 4821 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-var-lib-openvswitch\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.195430 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-cnibin\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.195467 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-hostroot\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.195512 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-kubelet\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.195532 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-run-openvswitch\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.195550 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqmhn\" (UniqueName: \"kubernetes.io/projected/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-kube-api-access-pqmhn\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.195571 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-ovnkube-config\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.195590 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-host-run-multus-certs\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.195675 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-log-socket\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.195695 4821 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-etc-kubernetes\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.195719 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-run-netns\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.195739 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-cni-netd\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.195755 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-ovn-node-metrics-cert\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.195786 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-env-overrides\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.195807 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-system-cni-dir\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.195823 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c84981f2-eb86-4d0d-9322-db1b62feeac8-cni-binary-copy\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.195843 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-host-var-lib-cni-multus\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.195859 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-host-var-lib-kubelet\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 
17:03:47.208321 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-09-30T17:03:47Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.226257 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:47Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.238032 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:47Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.259428 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:47Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.279504 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:47Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.293072 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:47Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.296263 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-slash\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.296295 4821 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-var-lib-openvswitch\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.296315 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-cnibin\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.296333 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-hostroot\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.296361 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-kubelet\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.296380 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-run-openvswitch\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.296397 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqmhn\" (UniqueName: \"kubernetes.io/projected/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-kube-api-access-pqmhn\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.296414 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-ovnkube-config\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.296430 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-host-run-multus-certs\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.297180 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-log-socket\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.297250 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-log-socket\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.296554 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-run-openvswitch\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.296558 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-cnibin\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.296584 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-host-run-multus-certs\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.296582 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-hostroot\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.296636 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-var-lib-openvswitch\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.297124 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-ovnkube-config\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.296429 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-slash\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.296517 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-kubelet\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.297204 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-etc-kubernetes\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 
17:03:47.297416 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-run-netns\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.297465 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-etc-kubernetes\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.297488 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-run-netns\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.297511 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-cni-netd\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.297530 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-ovn-node-metrics-cert\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.297577 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-cni-netd\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.297546 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-env-overrides\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.298353 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-system-cni-dir\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.298376 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c84981f2-eb86-4d0d-9322-db1b62feeac8-cni-binary-copy\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.298393 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-host-var-lib-cni-multus\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.298408 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-host-var-lib-kubelet\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.298425 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-system-cni-dir\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.298458 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-systemd-units\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.298434 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-systemd-units\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.298492 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-host-var-lib-kubelet\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.298506 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-etc-openvswitch\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.298504 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-host-var-lib-cni-multus\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.298575 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-etc-openvswitch\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.298553 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-k7m5w\" (UID: 
\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.298531 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.298674 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0-cnibin\") pod \"multus-additional-cni-plugins-jpnpn\" (UID: \"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\") " pod="openshift-multus/multus-additional-cni-plugins-jpnpn" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.298717 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbggh\" (UniqueName: \"kubernetes.io/projected/22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0-kube-api-access-kbggh\") pod \"multus-additional-cni-plugins-jpnpn\" (UID: \"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\") " pod="openshift-multus/multus-additional-cni-plugins-jpnpn" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.298787 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-ovnkube-script-lib\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.298807 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-multus-cni-dir\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.298831 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-host-run-netns\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.298859 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-env-overrides\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.298918 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0-cnibin\") pod \"multus-additional-cni-plugins-jpnpn\" (UID: \"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\") " pod="openshift-multus/multus-additional-cni-plugins-jpnpn" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.299182 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c84981f2-eb86-4d0d-9322-db1b62feeac8-cni-binary-copy\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " 
pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.298868 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xwk2\" (UniqueName: \"kubernetes.io/projected/c84981f2-eb86-4d0d-9322-db1b62feeac8-kube-api-access-5xwk2\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.299809 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0-os-release\") pod \"multus-additional-cni-plugins-jpnpn\" (UID: \"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\") " pod="openshift-multus/multus-additional-cni-plugins-jpnpn" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.299837 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jpnpn\" (UID: \"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\") " pod="openshift-multus/multus-additional-cni-plugins-jpnpn" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.299860 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-run-ovn\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.299882 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-multus-socket-dir-parent\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.299374 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-host-run-netns\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.299905 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-host-var-lib-cni-bin\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.299252 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-multus-cni-dir\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.299933 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c84981f2-eb86-4d0d-9322-db1b62feeac8-multus-daemon-config\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.299958 4821 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-run-ovn-kubernetes\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.299977 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0-system-cni-dir\") pod \"multus-additional-cni-plugins-jpnpn\" (UID: \"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\") " pod="openshift-multus/multus-additional-cni-plugins-jpnpn" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.299994 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jpnpn\" (UID: \"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\") " pod="openshift-multus/multus-additional-cni-plugins-jpnpn" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.300014 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-cni-bin\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.300055 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-host-run-k8s-cni-cncf-io\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.300091 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-multus-conf-dir\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.300118 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-run-systemd\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.300137 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-node-log\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.300152 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-os-release\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.300169 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0-cni-binary-copy\") pod \"multus-additional-cni-plugins-jpnpn\" (UID: \"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\") " pod="openshift-multus/multus-additional-cni-plugins-jpnpn" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.300176 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0-os-release\") pod \"multus-additional-cni-plugins-jpnpn\" (UID: \"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\") " pod="openshift-multus/multus-additional-cni-plugins-jpnpn" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.299703 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-ovnkube-script-lib\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.300247 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-run-ovn\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.300294 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-multus-socket-dir-parent\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.300323 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-host-var-lib-cni-bin\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.300732 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jpnpn\" (UID: \"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\") " pod="openshift-multus/multus-additional-cni-plugins-jpnpn" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.300842 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c84981f2-eb86-4d0d-9322-db1b62feeac8-multus-daemon-config\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.300843 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0-system-cni-dir\") pod \"multus-additional-cni-plugins-jpnpn\" (UID: \"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\") " pod="openshift-multus/multus-additional-cni-plugins-jpnpn" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.300904 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-run-systemd\") pod 
\"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.300925 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-cni-bin\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.300943 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-host-run-k8s-cni-cncf-io\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.300962 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-multus-conf-dir\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.300996 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c84981f2-eb86-4d0d-9322-db1b62feeac8-os-release\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.301013 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-node-log\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.301331 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jpnpn\" (UID: \"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\") " pod="openshift-multus/multus-additional-cni-plugins-jpnpn" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.301362 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-run-ovn-kubernetes\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.301424 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0-cni-binary-copy\") pod \"multus-additional-cni-plugins-jpnpn\" (UID: \"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\") " pod="openshift-multus/multus-additional-cni-plugins-jpnpn" Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.303826 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-ovn-node-metrics-cert\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:47 crc 
kubenswrapper[4821]: I0930 17:03:47.321691 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqmhn\" (UniqueName: \"kubernetes.io/projected/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-kube-api-access-pqmhn\") pod \"ovnkube-node-k7m5w\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w"
Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.323410 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:47Z is after 2025-08-24T17:21:41Z"
Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.326254 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbggh\" (UniqueName: \"kubernetes.io/projected/22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0-kube-api-access-kbggh\") pod \"multus-additional-cni-plugins-jpnpn\" (UID: \"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\") " pod="openshift-multus/multus-additional-cni-plugins-jpnpn"
Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.327558 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xwk2\" (UniqueName: \"kubernetes.io/projected/c84981f2-eb86-4d0d-9322-db1b62feeac8-kube-api-access-5xwk2\") pod \"multus-h9sjg\" (UID: \"c84981f2-eb86-4d0d-9322-db1b62feeac8\") " pod="openshift-multus/multus-h9sjg"
Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.361670 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:47Z is after 2025-08-24T17:21:41Z"
Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.383896 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:47Z is after 2025-08-24T17:21:41Z"
Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.411485 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:47Z is after 2025-08-24T17:21:41Z"
Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.428248 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jpnpn"
Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.439275 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:47Z is after 2025-08-24T17:21:41Z"
Sep 30 17:03:47 crc kubenswrapper[4821]: W0930 17:03:47.441175 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22c1c4c3_b314_4e7b_9f4d_e83a4c9c91b0.slice/crio-bc36b18af69dd1b86b2cdef53a622c66f033bf637c20dfa513ba7e03251a294a WatchSource:0}: Error finding container bc36b18af69dd1b86b2cdef53a622c66f033bf637c20dfa513ba7e03251a294a: Status 404 returned error can't find the container with id bc36b18af69dd1b86b2cdef53a622c66f033bf637c20dfa513ba7e03251a294a
Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.443676 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-h9sjg"
Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.451204 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w"
Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.471498 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:47Z is after 2025-08-24T17:21:41Z"
Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.539095 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:47Z is after 2025-08-24T17:21:41Z"
Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.705975 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 30 17:03:47 crc kubenswrapper[4821]: E0930 17:03:47.706108 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.706229 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 30 17:03:47 crc kubenswrapper[4821]: E0930 17:03:47.706285 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.876661 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657"}
Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.879671 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h9sjg" event={"ID":"c84981f2-eb86-4d0d-9322-db1b62feeac8","Type":"ContainerStarted","Data":"9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd"}
Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.879711 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h9sjg" event={"ID":"c84981f2-eb86-4d0d-9322-db1b62feeac8","Type":"ContainerStarted","Data":"065077d8c5b83c6296443ccd798d8f45cbaf5646e715ea6123463c543c5048c3"}
Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.881872 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" event={"ID":"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0","Type":"ContainerStarted","Data":"6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79"}
Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.881911 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" event={"ID":"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0","Type":"ContainerStarted","Data":"bc36b18af69dd1b86b2cdef53a622c66f033bf637c20dfa513ba7e03251a294a"}
Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.883300 4821 generic.go:334] "Generic (PLEG): container finished" podID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerID="b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499" exitCode=0
Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.883626 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" event={"ID":"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca","Type":"ContainerDied","Data":"b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499"}
Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.883651 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" event={"ID":"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca","Type":"ContainerStarted","Data":"6c27a98f860004d05b6d45efd2de111a51e169967e8a1a8744009479a5628e2d"}
Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.898053 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:47Z is after 2025-08-24T17:21:41Z"
Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.915783 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:47Z is after 2025-08-24T17:21:41Z"
Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.936284 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:47Z is after 2025-08-24T17:21:41Z"
Sep 30 17:03:47 crc kubenswrapper[4821]: I0930 17:03:47.976419 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:47Z is after 2025-08-24T17:21:41Z"
Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.018739 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:48Z is after 2025-08-24T17:21:41Z"
Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.041046 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:48Z is after 2025-08-24T17:21:41Z"
Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.064948 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:48Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.082019 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:48Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.098032 4821 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:48Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.109979 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:48Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.125045 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:48Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.138859 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:48Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.154021 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:48Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.165907 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:48Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.178433 4821 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:48Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.191826 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:48Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.206793 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:48Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.222462 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:48Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.240422 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:48Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.258035 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:48Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.271839 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:48Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.286570 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:48Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.304915 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:48Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.312288 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.312377 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.312410 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:03:48 crc kubenswrapper[4821]: E0930 17:03:48.312545 4821 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 17:03:48 crc kubenswrapper[4821]: E0930 17:03:48.312568 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:03:52.312514702 +0000 UTC m=+28.217560646 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:03:48 crc kubenswrapper[4821]: E0930 17:03:48.312639 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-09-30 17:03:52.312625334 +0000 UTC m=+28.217671498 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 17:03:48 crc kubenswrapper[4821]: E0930 17:03:48.312583 4821 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 17:03:48 crc kubenswrapper[4821]: E0930 17:03:48.312745 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 17:03:52.312733717 +0000 UTC m=+28.217779871 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.325985 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:48Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.340006 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:48Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.367480 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:48Z 
is after 2025-08-24T17:21:41Z" Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.412940 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.412997 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:03:48 crc kubenswrapper[4821]: E0930 17:03:48.413123 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 17:03:48 crc kubenswrapper[4821]: E0930 17:03:48.413142 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 17:03:48 crc kubenswrapper[4821]: E0930 17:03:48.413153 4821 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:03:48 crc kubenswrapper[4821]: E0930 17:03:48.413197 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-09-30 17:03:52.413184545 +0000 UTC m=+28.318230489 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:03:48 crc kubenswrapper[4821]: E0930 17:03:48.413123 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 17:03:48 crc kubenswrapper[4821]: E0930 17:03:48.413224 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 17:03:48 crc kubenswrapper[4821]: E0930 17:03:48.413234 4821 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:03:48 crc kubenswrapper[4821]: E0930 17:03:48.413272 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-09-30 17:03:52.413263777 +0000 UTC m=+28.318309721 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.706923 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:03:48 crc kubenswrapper[4821]: E0930 17:03:48.707888 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.890368 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" event={"ID":"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca","Type":"ContainerStarted","Data":"5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909"} Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.890416 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" event={"ID":"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca","Type":"ContainerStarted","Data":"8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730"} Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.890433 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" event={"ID":"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca","Type":"ContainerStarted","Data":"b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf"} Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.890444 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" event={"ID":"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca","Type":"ContainerStarted","Data":"e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de"} Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.891738 4821 generic.go:334] "Generic (PLEG): container finished" podID="22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0" containerID="6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79" exitCode=0 Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.892301 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" event={"ID":"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0","Type":"ContainerDied","Data":"6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79"} Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.935426 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:48Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.962244 4821 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:48Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.977110 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:48Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:48 crc kubenswrapper[4821]: I0930 17:03:48.993620 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:48Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:49 crc kubenswrapper[4821]: I0930 17:03:49.009810 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:49Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:49 crc kubenswrapper[4821]: I0930 17:03:49.031204 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:49Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:49 crc 
kubenswrapper[4821]: I0930 17:03:49.046299 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:49Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:49 crc kubenswrapper[4821]: I0930 17:03:49.070865 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:49Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:49 crc kubenswrapper[4821]: I0930 17:03:49.094485 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:49Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:49 crc kubenswrapper[4821]: I0930 17:03:49.110071 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:49Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:49 crc kubenswrapper[4821]: I0930 17:03:49.125804 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:49Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:49 crc kubenswrapper[4821]: I0930 17:03:49.141589 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:49Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:49 crc kubenswrapper[4821]: I0930 17:03:49.155922 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:49Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:49 crc kubenswrapper[4821]: I0930 17:03:49.706960 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:03:49 crc kubenswrapper[4821]: I0930 17:03:49.707058 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:03:49 crc kubenswrapper[4821]: E0930 17:03:49.707135 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:03:49 crc kubenswrapper[4821]: E0930 17:03:49.707224 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:03:49 crc kubenswrapper[4821]: I0930 17:03:49.899754 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" event={"ID":"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca","Type":"ContainerStarted","Data":"2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655"} Sep 30 17:03:49 crc kubenswrapper[4821]: I0930 17:03:49.899838 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" event={"ID":"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca","Type":"ContainerStarted","Data":"b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7"} Sep 30 17:03:49 crc kubenswrapper[4821]: I0930 17:03:49.901855 4821 generic.go:334] "Generic (PLEG): container finished" podID="22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0" containerID="1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d" exitCode=0 Sep 30 17:03:49 crc kubenswrapper[4821]: I0930 17:03:49.901905 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" event={"ID":"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0","Type":"ContainerDied","Data":"1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d"} Sep 30 17:03:49 crc kubenswrapper[4821]: I0930 17:03:49.916115 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:49Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:49 crc kubenswrapper[4821]: I0930 17:03:49.944015 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:49Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:49 crc kubenswrapper[4821]: I0930 17:03:49.959135 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:49Z is after 2025-08-24T17:21:41Z"
Sep 30 17:03:49 crc kubenswrapper[4821]: I0930 17:03:49.981364 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:49Z 
is after 2025-08-24T17:21:41Z"
Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.000386 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:49Z is after 2025-08-24T17:21:41Z"
Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.018576 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:50Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.037006 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:50Z is after 2025-08-24T17:21:41Z"
Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.049995 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:50Z is after 2025-08-24T17:21:41Z"
Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.061722 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:50Z is after 2025-08-24T17:21:41Z"
Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.099902 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:50Z is after 2025-08-24T17:21:41Z"
Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.114112 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:50Z is after 2025-08-24T17:21:41Z"
Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.128457 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:50Z is after 2025-08-24T17:21:41Z"
Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.146434 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-
30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:50Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.217726 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-55rq2"] Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.218213 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-55rq2" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.220346 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.220886 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.221247 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.221720 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.245003 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:50Z 
is after 2025-08-24T17:21:41Z" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.269351 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:50Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.285905 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:50Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.303797 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:50Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.325074 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:50Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.334606 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/13b69688-7cac-4423-b4e1-553755af1baf-serviceca\") pod \"node-ca-55rq2\" (UID: \"13b69688-7cac-4423-b4e1-553755af1baf\") " pod="openshift-image-registry/node-ca-55rq2" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.334650 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13b69688-7cac-4423-b4e1-553755af1baf-host\") pod \"node-ca-55rq2\" (UID: \"13b69688-7cac-4423-b4e1-553755af1baf\") " pod="openshift-image-registry/node-ca-55rq2" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.334775 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgvmq\" (UniqueName: \"kubernetes.io/projected/13b69688-7cac-4423-b4e1-553755af1baf-kube-api-access-cgvmq\") pod \"node-ca-55rq2\" (UID: \"13b69688-7cac-4423-b4e1-553755af1baf\") " pod="openshift-image-registry/node-ca-55rq2" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.339006 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:50Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.353662 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:50Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.366010 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:50Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.376962 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:50Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.387782 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:50Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.400658 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:50Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.414339 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:50Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.431277 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-
30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:50Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.436167 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgvmq\" (UniqueName: \"kubernetes.io/projected/13b69688-7cac-4423-b4e1-553755af1baf-kube-api-access-cgvmq\") pod \"node-ca-55rq2\" (UID: \"13b69688-7cac-4423-b4e1-553755af1baf\") " pod="openshift-image-registry/node-ca-55rq2" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.436296 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13b69688-7cac-4423-b4e1-553755af1baf-host\") pod \"node-ca-55rq2\" (UID: \"13b69688-7cac-4423-b4e1-553755af1baf\") " pod="openshift-image-registry/node-ca-55rq2" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.436366 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/13b69688-7cac-4423-b4e1-553755af1baf-serviceca\") pod \"node-ca-55rq2\" (UID: \"13b69688-7cac-4423-b4e1-553755af1baf\") " pod="openshift-image-registry/node-ca-55rq2" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.436503 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13b69688-7cac-4423-b4e1-553755af1baf-host\") pod \"node-ca-55rq2\" (UID: \"13b69688-7cac-4423-b4e1-553755af1baf\") " pod="openshift-image-registry/node-ca-55rq2" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.438290 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/13b69688-7cac-4423-b4e1-553755af1baf-serviceca\") pod \"node-ca-55rq2\" (UID: \"13b69688-7cac-4423-b4e1-553755af1baf\") " pod="openshift-image-registry/node-ca-55rq2" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.442847 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55rq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13b69688-7cac-4423-b4e1-553755af1baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgvmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55rq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:50Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.459194 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgvmq\" (UniqueName: \"kubernetes.io/projected/13b69688-7cac-4423-b4e1-553755af1baf-kube-api-access-cgvmq\") pod \"node-ca-55rq2\" (UID: \"13b69688-7cac-4423-b4e1-553755af1baf\") " pod="openshift-image-registry/node-ca-55rq2" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.555861 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-55rq2" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.573805 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.577215 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.577280 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.577293 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.577438 4821 kubelet_node_status.go:76] "Attempting to register node" node="crc" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.585357 4821 kubelet_node_status.go:115] "Node was previously registered" node="crc" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.585669 4821 kubelet_node_status.go:79] "Successfully registered node" node="crc" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.586608 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.586727 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.586807 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.586899 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.586974 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:50Z","lastTransitionTime":"2025-09-30T17:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:03:50 crc kubenswrapper[4821]: E0930 17:03:50.600397 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2052ba73-7f50-4844-a6ef-43008c5ca24e\\\",\\\"systemUUID\\\":\\\"3c12aacb-94c6-4a5c-b29c-6c2e5c30c341\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:50Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.605565 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.605653 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.605671 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.605699 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.605714 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:50Z","lastTransitionTime":"2025-09-30T17:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:50 crc kubenswrapper[4821]: E0930 17:03:50.630747 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2052ba73-7f50-4844-a6ef-43008c5ca24e\\\",\\\"systemUUID\\\":\\\"3c12aacb-94c6-4a5c-b29c-6c2e5c30c341\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:50Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.634949 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.635074 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.635178 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.635276 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.635357 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:50Z","lastTransitionTime":"2025-09-30T17:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:50 crc kubenswrapper[4821]: E0930 17:03:50.649446 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2052ba73-7f50-4844-a6ef-43008c5ca24e\\\",\\\"systemUUID\\\":\\\"3c12aacb-94c6-4a5c-b29c-6c2e5c30c341\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:50Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.653485 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.653525 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.653534 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.653552 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.653563 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:50Z","lastTransitionTime":"2025-09-30T17:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:50 crc kubenswrapper[4821]: E0930 17:03:50.666625 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2052ba73-7f50-4844-a6ef-43008c5ca24e\\\",\\\"systemUUID\\\":\\\"3c12aacb-94c6-4a5c-b29c-6c2e5c30c341\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:50Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.670234 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.670343 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.670415 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.670476 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.670528 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:50Z","lastTransitionTime":"2025-09-30T17:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:50 crc kubenswrapper[4821]: E0930 17:03:50.681786 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2052ba73-7f50-4844-a6ef-43008c5ca24e\\\",\\\"systemUUID\\\":\\\"3c12aacb-94c6-4a5c-b29c-6c2e5c30c341\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:50Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:50 crc kubenswrapper[4821]: E0930 17:03:50.682357 4821 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.684345 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.684470 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.684563 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.684650 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.684727 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:50Z","lastTransitionTime":"2025-09-30T17:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.706441 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:03:50 crc kubenswrapper[4821]: E0930 17:03:50.707024 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.789606 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.791413 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.791524 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.791602 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.791688 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:50Z","lastTransitionTime":"2025-09-30T17:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.894614 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.894654 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.894664 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.894679 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.894688 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:50Z","lastTransitionTime":"2025-09-30T17:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.906372 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-55rq2" event={"ID":"13b69688-7cac-4423-b4e1-553755af1baf","Type":"ContainerStarted","Data":"7ca1fffcfa3c00155f7cd2e174872c2ea5baf5a8e4c111f482ac2f43736645df"} Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.906425 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-55rq2" event={"ID":"13b69688-7cac-4423-b4e1-553755af1baf","Type":"ContainerStarted","Data":"d6b8a35c0b95b57651470e7e54a8f4597bf023718dfe3d718896a5094d919cc1"} Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.909502 4821 generic.go:334] "Generic (PLEG): container finished" podID="22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0" containerID="9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224" exitCode=0 Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.909536 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" event={"ID":"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0","Type":"ContainerDied","Data":"9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224"} Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.925043 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55rq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13b69688-7cac-4423-b4e1-553755af1baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca1fffcfa3c00155f7cd2e174872c2ea5baf5a8e4c111f482ac2f43736645df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgvmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55rq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:50Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.944390 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:50Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.967815 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-
30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:50Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.981075 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:50Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.994636 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:50Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.996308 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.996342 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.996355 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.996375 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:50 crc kubenswrapper[4821]: I0930 17:03:50.996418 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:50Z","lastTransitionTime":"2025-09-30T17:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.014628 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:51Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.036716 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn
-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\
\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:51Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.050831 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb1
0543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:51Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.063938 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:51Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.075240 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:51Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.089051 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:51Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.098965 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.099025 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.099042 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.099064 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.099111 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:51Z","lastTransitionTime":"2025-09-30T17:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.101551 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:51Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.115051 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:51Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.128896 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:51Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.151356 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn
-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\
\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:51Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.167901 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb1
0543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:51Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.184754 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:51Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.197309 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:51Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.201445 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.201469 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.201478 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.201493 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.201504 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:51Z","lastTransitionTime":"2025-09-30T17:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.209631 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:51Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.222808 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:51Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.234463 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:51Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.252531 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:51Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.266299 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:51Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.278462 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:51Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.295593 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:51Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.303727 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.303767 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.303778 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.303797 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.303807 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:51Z","lastTransitionTime":"2025-09-30T17:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.311518 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:51Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.326838 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:51Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.340658 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55rq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13b69688-7cac-4423-b4e1-553755af1baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca1fffcfa3c00155f7cd2e174872c2ea5baf5a8e4c111f482ac2f43736645df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgvmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55rq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:51Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.406553 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.406600 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.406614 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.406639 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.406654 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:51Z","lastTransitionTime":"2025-09-30T17:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.508937 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.508981 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.508991 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.509006 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.509019 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:51Z","lastTransitionTime":"2025-09-30T17:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.611736 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.611775 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.611784 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.611800 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.611810 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:51Z","lastTransitionTime":"2025-09-30T17:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.706954 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:03:51 crc kubenswrapper[4821]: E0930 17:03:51.707127 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.706959 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:03:51 crc kubenswrapper[4821]: E0930 17:03:51.707545 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.715773 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.715884 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.715903 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.715928 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.715947 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:51Z","lastTransitionTime":"2025-09-30T17:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.821468 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.822106 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.822262 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.822465 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.822642 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:51Z","lastTransitionTime":"2025-09-30T17:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.921016 4821 generic.go:334] "Generic (PLEG): container finished" podID="22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0" containerID="7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba" exitCode=0 Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.921347 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" event={"ID":"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0","Type":"ContainerDied","Data":"7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba"} Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.928691 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.928728 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.928739 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.928755 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.928766 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:51Z","lastTransitionTime":"2025-09-30T17:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.934221 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" event={"ID":"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca","Type":"ContainerStarted","Data":"d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6"} Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.953601 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:51Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.972445 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:51Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:51 crc kubenswrapper[4821]: I0930 17:03:51.989354 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55rq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13b69688-7cac-4423-b4e1-553755af1baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca1fffcfa3c00155f7cd2e174872c2ea5baf5a8e4c111f482ac2f43736645df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgvmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55rq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:51Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.004318 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:52Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.034825 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:52Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.042181 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.042229 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.042250 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.042271 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.042284 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:52Z","lastTransitionTime":"2025-09-30T17:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.059690 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c2895
0860ef069b2499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:52Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.077822 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:52Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.095732 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:52Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.112582 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:52Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.130319 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:52Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.145140 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.145212 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.145223 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.145259 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.145284 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:52Z","lastTransitionTime":"2025-09-30T17:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.149323 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:52Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.169338 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:52Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.186510 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:52Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.200301 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:52Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.248637 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.248685 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.248696 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.248712 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.248728 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:52Z","lastTransitionTime":"2025-09-30T17:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.351888 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.351959 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.351978 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.352004 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.352021 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:52Z","lastTransitionTime":"2025-09-30T17:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.358268 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.358372 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.358412 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:03:52 crc kubenswrapper[4821]: E0930 17:03:52.358525 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:04:00.358497816 +0000 UTC m=+36.263543770 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:03:52 crc kubenswrapper[4821]: E0930 17:03:52.358551 4821 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 17:03:52 crc kubenswrapper[4821]: E0930 17:03:52.358581 4821 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 17:03:52 crc kubenswrapper[4821]: E0930 17:03:52.358592 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 17:04:00.358584238 +0000 UTC m=+36.263630182 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 17:03:52 crc kubenswrapper[4821]: E0930 17:03:52.358718 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 17:04:00.358690041 +0000 UTC m=+36.263735985 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.454816 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.454874 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.454888 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.454913 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.454929 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:52Z","lastTransitionTime":"2025-09-30T17:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
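
The TearDownAt failure above reports that kubevirt.io.hostpath-provisioner was missing from the kubelet's list of registered CSI drivers when the unmount was attempted. As a diagnostic sketch (the /var/lib/kubelet/plugins_registry path and the socket-name match are assumptions about the default kubelet layout, not taken from this log), one could list the plugin-registration sockets on the node:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Assumed default kubelet plugin-registration directory. CSI drivers
	// announce themselves by placing a registration socket here; the
	// TearDown error above means no entry existed for
	// kubevirt.io.hostpath-provisioner when the unmount ran.
	const dir = "/var/lib/kubelet/plugins_registry"

	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("readdir:", err)
		return
	}
	found := false
	for _, e := range entries {
		fmt.Println("registered plugin socket:", e.Name())
		if strings.Contains(e.Name(), "kubevirt.io.hostpath-provisioner") {
			found = true
		}
	}
	fmt.Println("hostpath-provisioner registration present:", found)
}

An empty listing here while the provisioner pod is still starting up would be consistent with the retry behaviour the log shows.
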
Has your network provider started?"} Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.459300 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.459362 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:03:52 crc kubenswrapper[4821]: E0930 17:03:52.459568 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 17:03:52 crc kubenswrapper[4821]: E0930 17:03:52.459608 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 17:03:52 crc kubenswrapper[4821]: E0930 17:03:52.459626 4821 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:03:52 crc kubenswrapper[4821]: E0930 17:03:52.459702 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-09-30 17:04:00.459681901 +0000 UTC m=+36.364727865 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:03:52 crc kubenswrapper[4821]: E0930 17:03:52.460147 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 17:03:52 crc kubenswrapper[4821]: E0930 17:03:52.460174 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 17:03:52 crc kubenswrapper[4821]: E0930 17:03:52.460185 4821 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:03:52 crc kubenswrapper[4821]: E0930 17:03:52.460227 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-09-30 17:04:00.460216613 +0000 UTC m=+36.365262577 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.557475 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.557530 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.557543 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.557557 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.557566 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:52Z","lastTransitionTime":"2025-09-30T17:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
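
The repeated "No retries permitted until ... (durationBeforeRetry 8s)" entries reflect per-operation exponential backoff in the volume manager. A minimal sketch of that schedule, assuming the doubling-from-500ms policy capped at 2m2s that kubelet's goroutinemap backoff uses by default (the constants are an assumption, not read from this log); under it, 8s corresponds to the fifth consecutive failure:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed policy: start at 500ms, double on each consecutive failure,
	// cap a little above two minutes.
	delay := 500 * time.Millisecond
	const maxDelay = 2*time.Minute + 2*time.Second
	for failure := 1; failure <= 9; failure++ {
		fmt.Printf("failure %d -> durationBeforeRetry %s\n", failure, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

Running this prints 500ms, 1s, 2s, 4s, 8s, ... which lines up with the 8s retry windows recorded above.
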
Has your network provider started?"} Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.661017 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.661128 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.661153 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.661178 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.661195 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:52Z","lastTransitionTime":"2025-09-30T17:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.706280 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:03:52 crc kubenswrapper[4821]: E0930 17:03:52.706522 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.763547 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.763582 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.763594 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.763620 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.763633 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:52Z","lastTransitionTime":"2025-09-30T17:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
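
Every NodeNotReady heartbeat in this window carries the same cause: no CNI configuration file in /etc/kubernetes/cni/net.d/. A minimal sketch of the equivalent on-node check, assuming the .conf/.conflist/.json extension filter that libcni applies when loading network configs:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory named in the kubelet message above.
	const dir = "/etc/kubernetes/cni/net.d"

	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("readdir:", err)
		return
	}
	var confs []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		// The state kubelet is reporting: NetworkReady=false,
		// NetworkPluginNotReady.
		fmt.Println("no CNI configuration files in", dir)
		return
	}
	fmt.Println("CNI configs:", confs)
}

The directory stays empty until the network plugin (here ovn-kubernetes, whose pod is still in PodInitializing below) writes its config, which is why the condition repeats on every heartbeat.
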
Has your network provider started?"} Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.865410 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.865450 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.865463 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.865480 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.865491 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:52Z","lastTransitionTime":"2025-09-30T17:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.940150 4821 generic.go:334] "Generic (PLEG): container finished" podID="22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0" containerID="9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577" exitCode=0 Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.940193 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" event={"ID":"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0","Type":"ContainerDied","Data":"9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577"} Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.966518 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:52Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.968270 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.968321 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.968333 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.968385 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.968404 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:52Z","lastTransitionTime":"2025-09-30T17:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
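
All of the "Failed to update status for pod" entries die on a single condition: the serving certificate behind https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, well before the current time in the log. A minimal diagnostic sketch that fetches the certificate from that endpoint and reproduces the comparison in the error text (InsecureSkipVerify is used only so the handshake completes against the expired certificate and it can be inspected):

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Endpoint taken from the webhook error above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	now := time.Now()
	fmt.Println("NotBefore:", cert.NotBefore.Format(time.RFC3339))
	fmt.Println("NotAfter: ", cert.NotAfter.Format(time.RFC3339))
	// Mirrors the kubelet error: "current time ... is after ..."
	fmt.Println("expired:  ", now.After(cert.NotAfter))
}

If NotAfter prints as 2025-08-24T17:21:41Z, the webhook's serving certificate simply needs to be reissued; every status patch will keep failing until it is.
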
Has your network provider started?"} Sep 30 17:03:52 crc kubenswrapper[4821]: I0930 17:03:52.984430 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:52Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.020967 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.037416 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.051122 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.064254 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.070954 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.071054 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.071068 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.071100 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.071114 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:53Z","lastTransitionTime":"2025-09-30T17:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
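
The same expired-certificate failure also blocks the status patch for the webhook's own pod, network-node-identity-vrzqb, whose entry just below shows the webhook container mounting its serving certificate from the webhook-cert volume at /etc/webhook-cert/. A companion sketch that inspects the certificate file on disk rather than over the network (the tls.crt filename inside that mount is an assumption):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Assumed location: the webhook-cert mount path from the pod status,
	// plus a conventional tls.crt filename. Run inside the webhook
	// container or against a copied file.
	data, err := os.ReadFile("/etc/webhook-cert/tls.crt")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	fmt.Println("NotAfter:", cert.NotAfter.Format(time.RFC3339))
	fmt.Println("expired: ", time.Now().After(cert.NotAfter))
}
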
Has your network provider started?"} Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.081838 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.094008 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.108032 4821 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.121248 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.135279 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.151363 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.168247 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.174092 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.174140 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.174152 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.174169 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.174183 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:53Z","lastTransitionTime":"2025-09-30T17:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.182828 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55rq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13b69688-7cac-4423-b4e1-553755af1baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca1fffcfa3c00155f7cd2e174872c2ea5baf5a8e4c111f482ac2f43736645df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgvmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55rq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.276345 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.276386 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.276396 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.276412 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.276422 4821 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:53Z","lastTransitionTime":"2025-09-30T17:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.379322 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.379364 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.379373 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.379394 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.379408 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:53Z","lastTransitionTime":"2025-09-30T17:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.481716 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.481739 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.481749 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.481763 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.481773 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:53Z","lastTransitionTime":"2025-09-30T17:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.584104 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.584163 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.584180 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.584197 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.584206 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:53Z","lastTransitionTime":"2025-09-30T17:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.686949 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.686996 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.687006 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.687026 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.687037 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:53Z","lastTransitionTime":"2025-09-30T17:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.706855 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.706876 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:03:53 crc kubenswrapper[4821]: E0930 17:03:53.707026 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:03:53 crc kubenswrapper[4821]: E0930 17:03:53.707195 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.790197 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.790249 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.790262 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.790288 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.790312 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:53Z","lastTransitionTime":"2025-09-30T17:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.892933 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.892982 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.892996 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.893016 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.893030 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:53Z","lastTransitionTime":"2025-09-30T17:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.945379 4821 generic.go:334] "Generic (PLEG): container finished" podID="22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0" containerID="854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad" exitCode=0 Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.945435 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" event={"ID":"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0","Type":"ContainerDied","Data":"854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad"} Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.962260 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" event={"ID":"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca","Type":"ContainerStarted","Data":"8ade2123ddf55d6476db499055c2d293c1e527e58506b4c7d5db874a65d440b4"} Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.963743 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.963823 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:53 crc kubenswrapper[4821]: I0930 17:03:53.988415 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.001694 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.001734 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.001745 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.001762 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.001775 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:54Z","lastTransitionTime":"2025-09-30T17:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.028750 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.046035 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.054714 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.070648 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.073481 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.088762 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.104498 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.104534 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.104545 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.104562 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.104576 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:54Z","lastTransitionTime":"2025-09-30T17:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.105222 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51
a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\
\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.115341 4821 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-image-registry/node-ca-55rq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13b69688-7cac-4423-b4e1-553755af1baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca1fffcfa3c00155f7cd2e174872c2ea5baf5a8e4c111f482ac2f43736645df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgvmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55rq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.128240 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.150875 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z 
is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.165882 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.182294 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.195346 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.206121 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.206154 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.206164 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.206179 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.206190 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:54Z","lastTransitionTime":"2025-09-30T17:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.210344 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.224415 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.238688 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.250951 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55rq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13b69688-7cac-4423-b4e1-553755af1baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca1fffcfa3c00155f7cd2e174872c2ea5baf5a8e4c111f482ac2f43736645df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgvmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55rq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.266399 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.283781 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.298638 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.308832 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.308874 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.308885 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.308903 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.308915 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:54Z","lastTransitionTime":"2025-09-30T17:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.315376 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.333764 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.357035 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ade2123ddf55d6476db499055c2d293c1e527e5
8506b4c7d5db874a65d440b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.375171 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.388994 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.404263 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.411400 4821 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.411621 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.411649 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.411671 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.411689 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:54Z","lastTransitionTime":"2025-09-30T17:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.418390 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.430666 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.444518 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.514853 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.514916 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.514955 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.514979 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.514993 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:54Z","lastTransitionTime":"2025-09-30T17:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.617464 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.617506 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.617517 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.617532 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.617544 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:54Z","lastTransitionTime":"2025-09-30T17:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.706868 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:03:54 crc kubenswrapper[4821]: E0930 17:03:54.707102 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.719502 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.719555 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.719584 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.719610 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.719627 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:54Z","lastTransitionTime":"2025-09-30T17:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.727308 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.741510 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.753807 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.766656 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.791990 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.806264 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.817541 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55rq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13b69688-7cac-4423-b4e1-553755af1baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca1fffcfa3c00155f7cd2e174872c2ea5baf5a8e4c111f482ac2f43736645df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgvmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55rq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.822750 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.822786 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.822799 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.822820 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.822836 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:54Z","lastTransitionTime":"2025-09-30T17:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.832518 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.848638 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servic
eaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\
\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ade2123ddf55d6476db499055c2d293c1e527e58506b4c7d5db874a65d440b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswit
ch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.863695 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.877493 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.898853 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.913548 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.924846 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.924888 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.924899 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.924919 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.924933 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:54Z","lastTransitionTime":"2025-09-30T17:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.928296 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.969817 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" event={"ID":"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0","Type":"ContainerStarted","Data":"8c8193f74cb6e485a4d20caa7f9ba2691f8d1ac83503ae9af306c7efa4d33c12"} Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.969892 4821 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 30 17:03:54 crc kubenswrapper[4821]: I0930 17:03:54.989487 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.006071 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:55Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.022518 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:55Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.027198 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.027263 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.027272 4821 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.027292 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.027303 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:55Z","lastTransitionTime":"2025-09-30T17:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.045178 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:55Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.063154 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:55Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.095653 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ade2123ddf55d6476db499055c2d293c1e527e5
8506b4c7d5db874a65d440b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:55Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.112622 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:55Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.125281 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:55Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.130270 4821 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.130342 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.130359 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.130390 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.130410 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:55Z","lastTransitionTime":"2025-09-30T17:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.142052 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:55Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.165301 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:55Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.186653 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:55Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.203229 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:55Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.218587 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c8193f74cb6e485a4d20caa7f9ba2691f8d1ac83503ae9af306c7efa4d33c12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:55Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.227645 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55rq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13b69688-7cac-4423-b4e1-553755af1baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca1fffcfa3c00155f7cd2e174872c2ea5baf5a8e4c111f482ac2f43736645df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgvmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55rq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:55Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.232557 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.232582 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.232592 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.232606 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.232616 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:55Z","lastTransitionTime":"2025-09-30T17:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.335965 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.336013 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.336026 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.336041 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.336052 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:55Z","lastTransitionTime":"2025-09-30T17:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.438973 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.439012 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.439021 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.439035 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.439046 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:55Z","lastTransitionTime":"2025-09-30T17:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.542241 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.542290 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.542301 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.542319 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.542331 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:55Z","lastTransitionTime":"2025-09-30T17:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.645569 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.645611 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.645621 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.645639 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.645650 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:55Z","lastTransitionTime":"2025-09-30T17:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.707024 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.707142 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 30 17:03:55 crc kubenswrapper[4821]: E0930 17:03:55.707269 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 30 17:03:55 crc kubenswrapper[4821]: E0930 17:03:55.707382 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.749436 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.749475 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.749484 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.749502 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.749514 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:55Z","lastTransitionTime":"2025-09-30T17:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.852178 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.852230 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.852246 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.852268 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.852285 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:55Z","lastTransitionTime":"2025-09-30T17:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.955642 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.955699 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.955711 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.955727 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.955738 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:55Z","lastTransitionTime":"2025-09-30T17:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:03:55 crc kubenswrapper[4821]: I0930 17:03:55.972881 4821 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.052154 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.058215 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.058257 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.058274 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.058293 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.058304 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:56Z","lastTransitionTime":"2025-09-30T17:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.161185 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.161241 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.161258 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.161280 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.161293 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:56Z","lastTransitionTime":"2025-09-30T17:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.265182 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.266064 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.266223 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.266337 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.266468 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:56Z","lastTransitionTime":"2025-09-30T17:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.369948 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.370003 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.370017 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.370041 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.370058 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:56Z","lastTransitionTime":"2025-09-30T17:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.472550 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.472574 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.472583 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.472596 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.472609 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:56Z","lastTransitionTime":"2025-09-30T17:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.575479 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.575937 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.576032 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.576162 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.576257 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:56Z","lastTransitionTime":"2025-09-30T17:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.679492 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.679743 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.679803 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.679912 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.679982 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:56Z","lastTransitionTime":"2025-09-30T17:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.706064 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 30 17:03:56 crc kubenswrapper[4821]: E0930 17:03:56.706243 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.782307 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.782337 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.782345 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.782359 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.782367 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:56Z","lastTransitionTime":"2025-09-30T17:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.885455 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.885492 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.885501 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.885515 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.885524 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:56Z","lastTransitionTime":"2025-09-30T17:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.978781 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7m5w_6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca/ovnkube-controller/0.log"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.983423 4821 generic.go:334] "Generic (PLEG): container finished" podID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerID="8ade2123ddf55d6476db499055c2d293c1e527e58506b4c7d5db874a65d440b4" exitCode=1
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.983471 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" event={"ID":"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca","Type":"ContainerDied","Data":"8ade2123ddf55d6476db499055c2d293c1e527e58506b4c7d5db874a65d440b4"}
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.984282 4821 scope.go:117] "RemoveContainer" containerID="8ade2123ddf55d6476db499055c2d293c1e527e58506b4c7d5db874a65d440b4"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.987764 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.988053 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.988146 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.988320 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:03:56 crc kubenswrapper[4821]: I0930 17:03:56.988396 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:56Z","lastTransitionTime":"2025-09-30T17:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.010770 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:57Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.028183 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:57Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.040613 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:57Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.066693 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ade2123ddf55d6476db499055c2d293c1e527e5
8506b4c7d5db874a65d440b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ade2123ddf55d6476db499055c2d293c1e527e58506b4c7d5db874a65d440b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:03:56Z\\\",\\\"message\\\":\\\"go:141\\\\nI0930 17:03:56.454017 6031 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0930 17:03:56.454707 6031 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0930 17:03:56.454730 6031 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0930 17:03:56.454743 6031 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0930 17:03:56.454764 6031 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0930 17:03:56.454774 6031 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0930 17:03:56.454802 6031 factory.go:656] Stopping watch factory\\\\nI0930 17:03:56.454824 6031 handler.go:208] Removed *v1.Node event handler 7\\\\nI0930 17:03:56.454850 6031 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0930 17:03:56.455009 6031 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0930 17:03:56.455026 6031 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0930 17:03:56.455032 6031 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0930 17:03:56.455039 6031 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:57Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.084774 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:57Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.091205 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.091250 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.091268 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.091292 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.091309 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:57Z","lastTransitionTime":"2025-09-30T17:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.109725 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:57Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.122956 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:57Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.142457 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:57Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.158781 4821 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:57Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.175558 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:57Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.191439 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:57Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.194683 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.194705 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.194714 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.194729 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.194739 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:57Z","lastTransitionTime":"2025-09-30T17:03:57Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.202931 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55rq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13b69688-7cac-4423-b4e1-553755af1baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca1fffcfa3c00155f7cd2e174872c2ea5baf5a8e4c111f482ac2f43736645df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgvmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55rq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:57Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.219990 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:57Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.238612 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c8193f74cb6e485a4d20caa7f9ba2691f8d1ac83503ae9af306c7efa4d33c12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd5
0ca745af0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:57Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.297427 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.297479 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.297492 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.297528 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.297547 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:57Z","lastTransitionTime":"2025-09-30T17:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.399772 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.399830 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.399845 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.399863 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.400254 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:57Z","lastTransitionTime":"2025-09-30T17:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.503791 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.503838 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.503849 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.503873 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.503899 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:57Z","lastTransitionTime":"2025-09-30T17:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.607044 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.607140 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.607158 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.607184 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.607213 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:57Z","lastTransitionTime":"2025-09-30T17:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.706629 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.706672 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:03:57 crc kubenswrapper[4821]: E0930 17:03:57.706847 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:03:57 crc kubenswrapper[4821]: E0930 17:03:57.706999 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.709416 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.709460 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.709471 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.709490 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.709502 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:57Z","lastTransitionTime":"2025-09-30T17:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.812452 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.812492 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.812505 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.812537 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.812554 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:57Z","lastTransitionTime":"2025-09-30T17:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.915181 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.915232 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.915248 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.915269 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.915283 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:57Z","lastTransitionTime":"2025-09-30T17:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.990618 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7m5w_6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca/ovnkube-controller/1.log" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.991588 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7m5w_6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca/ovnkube-controller/0.log" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.995385 4821 generic.go:334] "Generic (PLEG): container finished" podID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerID="276f790bc8c4bfc0944f34147b88bebda378343926612fd54a1b94faafb773bb" exitCode=1 Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.995475 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" event={"ID":"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca","Type":"ContainerDied","Data":"276f790bc8c4bfc0944f34147b88bebda378343926612fd54a1b94faafb773bb"} Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.995538 4821 scope.go:117] "RemoveContainer" containerID="8ade2123ddf55d6476db499055c2d293c1e527e58506b4c7d5db874a65d440b4" Sep 30 17:03:57 crc kubenswrapper[4821]: I0930 17:03:57.996678 4821 scope.go:117] "RemoveContainer" containerID="276f790bc8c4bfc0944f34147b88bebda378343926612fd54a1b94faafb773bb" Sep 30 17:03:57 crc kubenswrapper[4821]: E0930 17:03:57.996902 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-k7m5w_openshift-ovn-kubernetes(6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.017200 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.017241 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.017251 4821 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.017271 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.017283 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:58Z","lastTransitionTime":"2025-09-30T17:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.021377 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:58Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.035805 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:58Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.048167 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:58Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.061668 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:58Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.075880 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:58Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.101169 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c8193f74cb6e485a4d20caa7f9ba2691f8d1ac83503ae9af306c7efa4d33c12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:58Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.116860 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55rq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13b69688-7cac-4423-b4e1-553755af1baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca1fffcfa3c00155f7cd2e174872c2ea5baf5a8e4c111f482ac2f43736645df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgvmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55rq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:58Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.119837 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.119911 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.119921 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.119935 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.119946 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:58Z","lastTransitionTime":"2025-09-30T17:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.137325 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:58Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.159609 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:58Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.179172 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:58Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.205156 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:58Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.219044 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:58Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.222837 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.222922 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.222941 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.222968 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.222987 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:58Z","lastTransitionTime":"2025-09-30T17:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.234374 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:58Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.255444 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276f790bc8c4bfc0944f34147b88bebda378343926612fd54a1b94faafb773bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ade2123ddf55d6476db499055c2d293c1e527e58506b4c7d5db874a65d440b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:03:56Z\\\",\\\"message\\\":\\\"go:141\\\\nI0930 17:03:56.454017 6031 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0930 17:03:56.454707 6031 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0930 17:03:56.454730 6031 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0930 17:03:56.454743 6031 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0930 17:03:56.454764 6031 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0930 17:03:56.454774 6031 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0930 17:03:56.454802 6031 factory.go:656] Stopping watch factory\\\\nI0930 17:03:56.454824 6031 handler.go:208] Removed *v1.Node event handler 7\\\\nI0930 17:03:56.454850 6031 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0930 17:03:56.455009 6031 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0930 17:03:56.455026 6031 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0930 17:03:56.455032 6031 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0930 17:03:56.455039 6031 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://276f790bc8c4bfc0944f34147b88bebda378343926612fd54a1b94faafb773bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:03:57Z\\\",\\\"message\\\":\\\"rent time 2025-09-30T17:03:57Z is after 2025-08-24T17:21:41Z]\\\\nI0930 17:03:57.839580 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0930 17:03:57.839599 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0930 17:03:57.839614 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-jpnpn\\\\nI0930 17:03:57.836193 6156 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0930 17:03:57.839621 6156 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-jpnpn in node crc\\\\nI0930 17:03:57.839630 6156 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-jpnpn after 0 failed attempt(s)\\\\nI0930 17:03:57.839635 6156 default_network_controller.go:776] 
Recording success event on pod openshift-multus/multus-additional-cni-plugins-jpnpn\\\\nI0930 17:03:57.839634 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0930 17:03:57.839605 6156 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0930 17:03:57.839646 6156 ovn.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:58Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.325774 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.325818 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.325828 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.325848 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.325861 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:58Z","lastTransitionTime":"2025-09-30T17:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.429123 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.429196 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.429216 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.429246 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.429269 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:58Z","lastTransitionTime":"2025-09-30T17:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.531289 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.531322 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.531332 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.531350 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.531360 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:58Z","lastTransitionTime":"2025-09-30T17:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.634291 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.634328 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.634338 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.634356 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.634369 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:58Z","lastTransitionTime":"2025-09-30T17:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.706398 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:03:58 crc kubenswrapper[4821]: E0930 17:03:58.706736 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.707830 4821 scope.go:117] "RemoveContainer" containerID="eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.736813 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.736861 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.736874 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.736894 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.736911 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:58Z","lastTransitionTime":"2025-09-30T17:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.840502 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.840787 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.840856 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.840919 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.840973 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:58Z","lastTransitionTime":"2025-09-30T17:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.945387 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.945432 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.945442 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.945458 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:58 crc kubenswrapper[4821]: I0930 17:03:58.945467 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:58Z","lastTransitionTime":"2025-09-30T17:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.001949 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7m5w_6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca/ovnkube-controller/1.log" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.007563 4821 scope.go:117] "RemoveContainer" containerID="276f790bc8c4bfc0944f34147b88bebda378343926612fd54a1b94faafb773bb" Sep 30 17:03:59 crc kubenswrapper[4821]: E0930 17:03:59.007705 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-k7m5w_openshift-ovn-kubernetes(6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.019753 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:59Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.031067 4821 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:59Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.044935 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:59Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.047531 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.047554 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.047563 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.047578 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.047590 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:59Z","lastTransitionTime":"2025-09-30T17:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.059860 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:59Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.076935 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c8193f74cb6e485a4d20caa7f9ba2691f8d1ac83503ae9af306c7efa4d33c12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:59Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.090281 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55rq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13b69688-7cac-4423-b4e1-553755af1baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca1fffcfa3c00155f7cd2e174872c2ea5baf5a8e4c111f482ac2f43736645df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgvmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55rq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:59Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.103465 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:59Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.126342 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:59Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.140998 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:59Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.150637 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.150709 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.150729 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.150757 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.150776 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:59Z","lastTransitionTime":"2025-09-30T17:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.155064 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:59Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.168676 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:59Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.198786 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276f790bc8c4bfc0944f34147b88bebda3783439
26612fd54a1b94faafb773bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://276f790bc8c4bfc0944f34147b88bebda378343926612fd54a1b94faafb773bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:03:57Z\\\",\\\"message\\\":\\\"rent time 2025-09-30T17:03:57Z is after 2025-08-24T17:21:41Z]\\\\nI0930 17:03:57.839580 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0930 17:03:57.839599 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0930 17:03:57.839614 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-jpnpn\\\\nI0930 17:03:57.836193 6156 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0930 17:03:57.839621 6156 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-jpnpn in node crc\\\\nI0930 17:03:57.839630 6156 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-jpnpn after 0 failed attempt(s)\\\\nI0930 17:03:57.839635 6156 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-jpnpn\\\\nI0930 17:03:57.839634 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0930 17:03:57.839605 6156 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0930 17:03:57.839646 6156 ovn.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k7m5w_openshift-ovn-kubernetes(6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:59Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.214827 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:59Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.231509 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:59Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.253471 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.253517 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.253527 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.253543 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.253556 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:59Z","lastTransitionTime":"2025-09-30T17:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.355997 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.356038 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.356050 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.356064 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.356075 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:59Z","lastTransitionTime":"2025-09-30T17:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.458345 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.458396 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.458410 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.458428 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.458440 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:59Z","lastTransitionTime":"2025-09-30T17:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.561308 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.561342 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.561354 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.561370 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.561382 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:59Z","lastTransitionTime":"2025-09-30T17:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.599488 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5"] Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.600002 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.605173 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.605317 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.620126 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:59Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.633314 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rdm7\" (UniqueName: \"kubernetes.io/projected/0ed48c9a-6f81-43be-9b63-906ab51dc67c-kube-api-access-4rdm7\") pod \"ovnkube-control-plane-749d76644c-9dvx5\" (UID: \"0ed48c9a-6f81-43be-9b63-906ab51dc67c\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.633383 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0ed48c9a-6f81-43be-9b63-906ab51dc67c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9dvx5\" (UID: \"0ed48c9a-6f81-43be-9b63-906ab51dc67c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.633430 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0ed48c9a-6f81-43be-9b63-906ab51dc67c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9dvx5\" (UID: \"0ed48c9a-6f81-43be-9b63-906ab51dc67c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.633460 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0ed48c9a-6f81-43be-9b63-906ab51dc67c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9dvx5\" (UID: \"0ed48c9a-6f81-43be-9b63-906ab51dc67c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.634392 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:59Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.647107 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":
\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:59Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.658281 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:59Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.663712 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.667760 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.667873 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.667952 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.668063 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:59Z","lastTransitionTime":"2025-09-30T17:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.675241 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c8193f74cb6e485a4d20caa7f9ba2691f8d1ac83503ae9af306c7efa4d33c12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"starte
dAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:59Z is after 
2025-08-24T17:21:41Z" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.687412 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55rq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13b69688-7cac-4423-b4e1-553755af1baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca1fffcfa3c00155f7cd2e174872c2ea5baf5a8e4c111f482ac2f43736645df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgvmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55rq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:59Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.702497 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:59Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.706640 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.706649 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:03:59 crc kubenswrapper[4821]: E0930 17:03:59.706761 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:03:59 crc kubenswrapper[4821]: E0930 17:03:59.706884 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.716556 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:59Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.728801 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:59Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.734543 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0ed48c9a-6f81-43be-9b63-906ab51dc67c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9dvx5\" (UID: \"0ed48c9a-6f81-43be-9b63-906ab51dc67c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.734756 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0ed48c9a-6f81-43be-9b63-906ab51dc67c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9dvx5\" (UID: \"0ed48c9a-6f81-43be-9b63-906ab51dc67c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.734900 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rdm7\" (UniqueName: \"kubernetes.io/projected/0ed48c9a-6f81-43be-9b63-906ab51dc67c-kube-api-access-4rdm7\") pod \"ovnkube-control-plane-749d76644c-9dvx5\" (UID: \"0ed48c9a-6f81-43be-9b63-906ab51dc67c\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.735023 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0ed48c9a-6f81-43be-9b63-906ab51dc67c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9dvx5\" (UID: \"0ed48c9a-6f81-43be-9b63-906ab51dc67c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.735710 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0ed48c9a-6f81-43be-9b63-906ab51dc67c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9dvx5\" (UID: \"0ed48c9a-6f81-43be-9b63-906ab51dc67c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.735828 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0ed48c9a-6f81-43be-9b63-906ab51dc67c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9dvx5\" (UID: \"0ed48c9a-6f81-43be-9b63-906ab51dc67c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.741677 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0ed48c9a-6f81-43be-9b63-906ab51dc67c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9dvx5\" (UID: \"0ed48c9a-6f81-43be-9b63-906ab51dc67c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.749736 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:59Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.752246 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rdm7\" (UniqueName: \"kubernetes.io/projected/0ed48c9a-6f81-43be-9b63-906ab51dc67c-kube-api-access-4rdm7\") pod \"ovnkube-control-plane-749d76644c-9dvx5\" (UID: \"0ed48c9a-6f81-43be-9b63-906ab51dc67c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 
17:03:59.767446 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:59Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.772547 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.772610 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.772625 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.772646 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.772660 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:59Z","lastTransitionTime":"2025-09-30T17:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.785836 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:59Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.804496 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276f790bc8c4bfc0944f34147b88bebda378343926612fd54a1b94faafb773bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://276f790bc8c4bfc0944f34147b88bebda378343926612fd54a1b94faafb773bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:03:57Z\\\",\\\"message\\\":\\\"rent time 2025-09-30T17:03:57Z is after 2025-08-24T17:21:41Z]\\\\nI0930 17:03:57.839580 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0930 17:03:57.839599 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0930 17:03:57.839614 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-jpnpn\\\\nI0930 17:03:57.836193 6156 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0930 17:03:57.839621 6156 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-jpnpn in node crc\\\\nI0930 17:03:57.839630 6156 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-jpnpn after 0 failed attempt(s)\\\\nI0930 17:03:57.839635 6156 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-jpnpn\\\\nI0930 17:03:57.839634 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0930 17:03:57.839605 6156 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0930 17:03:57.839646 6156 ovn.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k7m5w_openshift-ovn-kubernetes(6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:59Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.816156 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ed48c9a-6f81-43be-9b63-906ab51dc67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9dvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:59Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.833139 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:03:59Z is after 2025-08-24T17:21:41Z" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.875620 4821 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.875666 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.875686 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.875712 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.875729 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:59Z","lastTransitionTime":"2025-09-30T17:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.918184 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" Sep 30 17:03:59 crc kubenswrapper[4821]: W0930 17:03:59.930498 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ed48c9a_6f81_43be_9b63_906ab51dc67c.slice/crio-90798904d17c5411021f1917f6542d39c2e4d3e3dab7a602da7a18c0d5b166bc WatchSource:0}: Error finding container 90798904d17c5411021f1917f6542d39c2e4d3e3dab7a602da7a18c0d5b166bc: Status 404 returned error can't find the container with id 90798904d17c5411021f1917f6542d39c2e4d3e3dab7a602da7a18c0d5b166bc Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.978185 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.978226 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.978237 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.978255 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:03:59 crc kubenswrapper[4821]: I0930 17:03:59.978272 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:03:59Z","lastTransitionTime":"2025-09-30T17:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.013223 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.015618 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bc042ce02785326f5b2c0316774ffba46609cf5983d731f84580109ec437611b"} Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.016138 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.017813 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" event={"ID":"0ed48c9a-6f81-43be-9b63-906ab51dc67c","Type":"ContainerStarted","Data":"90798904d17c5411021f1917f6542d39c2e4d3e3dab7a602da7a18c0d5b166bc"} Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.028242 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ed48c9a-6f81-43be-9b63-906ab51dc67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9dvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.044304 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc042ce02785326f5b2c0316774ffba46609cf5983d731f84580109ec437611b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.080415 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.090780 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.090813 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.090823 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.090839 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.090849 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:00Z","lastTransitionTime":"2025-09-30T17:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.104658 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.123338 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.141518 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.184565 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.192925 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.193214 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.193303 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.193372 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.193464 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:00Z","lastTransitionTime":"2025-09-30T17:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.210435 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276f790bc8c4bfc0944f34147b88bebda378343926612fd54a1b94faafb773bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://276f790bc8c4bfc0944f34147b88bebda378343926612fd54a1b94faafb773bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:03:57Z\\\",\\\"message\\\":\\\"rent time 2025-09-30T17:03:57Z is after 2025-08-24T17:21:41Z]\\\\nI0930 17:03:57.839580 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0930 17:03:57.839599 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0930 17:03:57.839614 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-jpnpn\\\\nI0930 17:03:57.836193 6156 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0930 17:03:57.839621 6156 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-jpnpn in node crc\\\\nI0930 17:03:57.839630 6156 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-jpnpn after 0 failed attempt(s)\\\\nI0930 17:03:57.839635 6156 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-jpnpn\\\\nI0930 17:03:57.839634 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0930 17:03:57.839605 6156 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0930 17:03:57.839646 6156 ovn.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k7m5w_openshift-ovn-kubernetes(6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.230226 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.266911 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.290176 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.296128 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.296351 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.296492 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 
17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.296623 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.296742 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:00Z","lastTransitionTime":"2025-09-30T17:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.305908 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.321357 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.344547 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c8193f74cb6e485a4d20caa7f9ba2691f8d1ac83503ae9af306c7efa4d33c12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.356428 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55rq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13b69688-7cac-4423-b4e1-553755af1baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca1fffcfa3c00155f7cd2e174872c2ea5baf5a8e4c111f482ac2f43736645df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgvmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55rq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.399201 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.399241 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.399253 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.399274 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.399285 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:00Z","lastTransitionTime":"2025-09-30T17:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.445676 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.445807 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.445843 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:04:00 crc kubenswrapper[4821]: E0930 17:04:00.445962 4821 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 17:04:00 crc kubenswrapper[4821]: E0930 17:04:00.446012 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 17:04:16.445999505 +0000 UTC m=+52.351045449 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 17:04:00 crc kubenswrapper[4821]: E0930 17:04:00.446331 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:04:16.446322922 +0000 UTC m=+52.351368866 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:04:00 crc kubenswrapper[4821]: E0930 17:04:00.446375 4821 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 17:04:00 crc kubenswrapper[4821]: E0930 17:04:00.446406 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 17:04:16.446398374 +0000 UTC m=+52.351444318 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.501915 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.501985 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.501999 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.502359 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.502379 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:00Z","lastTransitionTime":"2025-09-30T17:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.546908 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.546960 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:04:00 crc kubenswrapper[4821]: E0930 17:04:00.547173 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 17:04:00 crc kubenswrapper[4821]: E0930 17:04:00.547193 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 17:04:00 crc kubenswrapper[4821]: E0930 17:04:00.547205 4821 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:04:00 crc kubenswrapper[4821]: E0930 17:04:00.547282 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-09-30 17:04:16.547265342 +0000 UTC m=+52.452311286 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:04:00 crc kubenswrapper[4821]: E0930 17:04:00.547351 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 17:04:00 crc kubenswrapper[4821]: E0930 17:04:00.547360 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 17:04:00 crc kubenswrapper[4821]: E0930 17:04:00.547367 4821 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:04:00 crc kubenswrapper[4821]: E0930 17:04:00.547402 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-09-30 17:04:16.547396235 +0000 UTC m=+52.452442179 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.604998 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.605056 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.605096 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.605118 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.605135 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:00Z","lastTransitionTime":"2025-09-30T17:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.706386 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:04:00 crc kubenswrapper[4821]: E0930 17:04:00.706616 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.708348 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.708404 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.708416 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.708434 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.708447 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:00Z","lastTransitionTime":"2025-09-30T17:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.745122 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-zkvtw"] Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.745773 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:04:00 crc kubenswrapper[4821]: E0930 17:04:00.745866 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.763402 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.778823 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.795856 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.811032 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.811229 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.811313 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.811387 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.811462 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:00Z","lastTransitionTime":"2025-09-30T17:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.811975 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.826710 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zkvtw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djdqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djdqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:04:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zkvtw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.842199 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.850370 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc-metrics-certs\") pod \"network-metrics-daemon-zkvtw\" (UID: \"3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc\") " pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.850438 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djdqq\" (UniqueName: \"kubernetes.io/projected/3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc-kube-api-access-djdqq\") pod \"network-metrics-daemon-zkvtw\" (UID: \"3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc\") " pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.859357 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c8193f74cb6e485a4d20caa7f9ba2691f8d1ac83503ae9af306c7efa4d33c12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.871743 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55rq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13b69688-7cac-4423-b4e1-553755af1baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca1fffcfa3c00155f7cd2e174872c2ea5baf5a8e4c111f482ac2f43736645df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgvmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55rq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.887836 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.908136 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.913964 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.914145 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.914274 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.914378 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.914481 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:00Z","lastTransitionTime":"2025-09-30T17:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.925157 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.939991 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.951169 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djdqq\" (UniqueName: \"kubernetes.io/projected/3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc-kube-api-access-djdqq\") pod \"network-metrics-daemon-zkvtw\" (UID: \"3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc\") " pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.951256 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc-metrics-certs\") pod \"network-metrics-daemon-zkvtw\" (UID: \"3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc\") " pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:04:00 crc kubenswrapper[4821]: E0930 17:04:00.951384 4821 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 17:04:00 crc kubenswrapper[4821]: E0930 17:04:00.951458 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc-metrics-certs podName:3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc nodeName:}" failed. No retries permitted until 2025-09-30 17:04:01.451439839 +0000 UTC m=+37.356485783 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc-metrics-certs") pod "network-metrics-daemon-zkvtw" (UID: "3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc") : object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.953473 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.968746 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djdqq\" (UniqueName: \"kubernetes.io/projected/3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc-kube-api-access-djdqq\") pod \"network-metrics-daemon-zkvtw\" (UID: \"3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc\") " pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.971326 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276f790bc8c4bfc0944f34147b88bebda378343926612fd54a1b94faafb773bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://276f790bc8c4bfc0944f34147b88bebda378343926612fd54a1b94faafb773bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:03:57Z\\\",\\\"message\\\":\\\"rent time 2025-09-30T17:03:57Z is after 2025-08-24T17:21:41Z]\\\\nI0930 17:03:57.839580 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0930 17:03:57.839599 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0930 17:03:57.839614 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-jpnpn\\\\nI0930 17:03:57.836193 6156 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0930 17:03:57.839621 6156 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-jpnpn in node crc\\\\nI0930 17:03:57.839630 6156 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-jpnpn after 0 failed attempt(s)\\\\nI0930 17:03:57.839635 6156 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-jpnpn\\\\nI0930 17:03:57.839634 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0930 17:03:57.839605 6156 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0930 17:03:57.839646 6156 ovn.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k7m5w_openshift-ovn-kubernetes(6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.974796 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.974831 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.974840 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.974854 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.974863 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:00Z","lastTransitionTime":"2025-09-30T17:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.986117 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ed48c9a-6f81-43be-9b63-906ab51dc67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9dvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-09-30T17:04:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:00 crc kubenswrapper[4821]: E0930 17:04:00.988115 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2052ba73-7f50-4844-a6ef-43008c5ca24e\\\",\\\"systemUUID\\\":\\\"3c12aacb-94c6-4a5c-b29c-6c2e5c30c341\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.992481 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.992504 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.992513 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.992530 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:00 crc kubenswrapper[4821]: I0930 17:04:00.992542 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:00Z","lastTransitionTime":"2025-09-30T17:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.001350 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc042ce02785326f5b2c0316774ffba46609cf5983d731f84580109ec437611b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:01 crc kubenswrapper[4821]: E0930 17:04:01.006333 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2052ba73-7f50-4844-a6ef-43008c5ca24e\\\",\\\"systemUUID\\\":\\\"3c12aacb-94c6-4a5c-b29c-6c2e5c30c341\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:01Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.010230 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.010261 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.010272 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.010291 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.010302 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:01Z","lastTransitionTime":"2025-09-30T17:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.025386 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" event={"ID":"0ed48c9a-6f81-43be-9b63-906ab51dc67c","Type":"ContainerStarted","Data":"b2234f5a0ce398fbf4075fa759a154ccf0ac4175f55c41e0c21eac73ea23efa6"} Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.025438 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" event={"ID":"0ed48c9a-6f81-43be-9b63-906ab51dc67c","Type":"ContainerStarted","Data":"12383b2a87a569574eb67b31e8fed56fcf4ed8ee981714de224d0336ce782c6d"} Sep 30 17:04:01 crc kubenswrapper[4821]: E0930 17:04:01.028171 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2052ba73-7f50-4844-a6ef-43008c5ca24e\\\",\\\"systemUUID\\\":\\\"3c12aacb-94c6-4a5c-b29c-6c2e5c30c341\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:01Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.032040 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.032076 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.032101 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.032133 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.032147 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:01Z","lastTransitionTime":"2025-09-30T17:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.038494 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zkvtw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djdqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djdqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:04:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zkvtw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:01Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:01 crc kubenswrapper[4821]: E0930 17:04:01.044007 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2052ba73-7f50-4844-a6ef-43008c5ca24e\\\",\\\"systemUUID\\\":\\\"3c12aacb-94c6-4a5c-b29c-6c2e5c30c341\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-09-30T17:04:01Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.048025 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.048183 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.048373 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.048544 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.048706 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:01Z","lastTransitionTime":"2025-09-30T17:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.052737 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:01Z is 
after 2025-08-24T17:21:41Z" Sep 30 17:04:01 crc kubenswrapper[4821]: E0930 17:04:01.060017 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2052ba73-7f50-4844-a6ef-43008c5ca24e\\\",\\\"systemUUID\\\":\\\"3c12aacb-94c6-4a5c-b29c-6c2e5c30c341\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:01Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:01 crc kubenswrapper[4821]: E0930 17:04:01.060398 4821 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.062031 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.062116 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.062137 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.062161 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.062178 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:01Z","lastTransitionTime":"2025-09-30T17:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.063629 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:01Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.078164 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"
hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:01Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.090517 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed0828
7faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:01Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.109293 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c8193f74cb6e485a4d20caa7f9ba2691f8d1ac83503ae9af306c7efa4d33c12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:01Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.124185 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55rq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13b69688-7cac-4423-b4e1-553755af1baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca1fffcfa3c00155f7cd2e174872c2ea5baf5a8e4c111f482ac2f43736645df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgvmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55rq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:01Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.138327 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:01Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.159729 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276f790bc8c4bfc0944f34147b88bebda3783439
26612fd54a1b94faafb773bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://276f790bc8c4bfc0944f34147b88bebda378343926612fd54a1b94faafb773bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:03:57Z\\\",\\\"message\\\":\\\"rent time 2025-09-30T17:03:57Z is after 2025-08-24T17:21:41Z]\\\\nI0930 17:03:57.839580 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0930 17:03:57.839599 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0930 17:03:57.839614 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-jpnpn\\\\nI0930 17:03:57.836193 6156 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0930 17:03:57.839621 6156 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-jpnpn in node crc\\\\nI0930 17:03:57.839630 6156 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-jpnpn after 0 failed attempt(s)\\\\nI0930 17:03:57.839635 6156 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-jpnpn\\\\nI0930 17:03:57.839634 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0930 17:03:57.839605 6156 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0930 17:03:57.839646 6156 ovn.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k7m5w_openshift-ovn-kubernetes(6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:01Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.165526 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.165557 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.165568 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.165585 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.165596 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:01Z","lastTransitionTime":"2025-09-30T17:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.174619 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ed48c9a-6f81-43be-9b63-906ab51dc67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12383b2a87a569574eb67b31e8fed56fcf4ed8ee981714de224d0336ce782c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2234f5a0ce398fbf4075fa759a154ccf0ac4175f55c41e0c21eac73ea23efa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9dvx5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:01Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.190461 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc042ce02785326f5b2c0316774ffba46609cf5983d731f84580109ec437611b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:01Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.211318 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:01Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.224966 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:01Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.240901 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:01Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.260495 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:01Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.268376 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.268414 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.268447 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.268463 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.268472 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:01Z","lastTransitionTime":"2025-09-30T17:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.274582 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:01Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.371748 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.372147 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.372349 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.372609 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.373166 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:01Z","lastTransitionTime":"2025-09-30T17:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:01 crc kubenswrapper[4821]: E0930 17:04:01.455961 4821 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.455770 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc-metrics-certs\") pod \"network-metrics-daemon-zkvtw\" (UID: \"3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc\") " pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:04:01 crc kubenswrapper[4821]: E0930 17:04:01.456528 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc-metrics-certs podName:3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc nodeName:}" failed. No retries permitted until 2025-09-30 17:04:02.456495894 +0000 UTC m=+38.361541868 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc-metrics-certs") pod "network-metrics-daemon-zkvtw" (UID: "3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc") : object "openshift-multus"/"metrics-daemon-secret" not registered
Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.476539 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.476581 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.476597 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.476616 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.476627 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:01Z","lastTransitionTime":"2025-09-30T17:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[the five-record node-status block above repeats verbatim, apart from timestamps, at 17:04:01.579458 and 17:04:01.682745; repeats omitted]
Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.706944 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 30 17:04:01 crc kubenswrapper[4821]: I0930 17:04:01.707416 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 30 17:04:01 crc kubenswrapper[4821]: E0930 17:04:01.707610 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 30 17:04:01 crc kubenswrapper[4821]: E0930 17:04:01.707613 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[node-status block repeats at 17:04:01.787717, 17:04:01.891761, 17:04:01.995894, 17:04:02.098112, 17:04:02.201692, 17:04:02.305536, 17:04:02.408746; omitted]
Sep 30 17:04:02 crc kubenswrapper[4821]: I0930 17:04:02.468593 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc-metrics-certs\") pod \"network-metrics-daemon-zkvtw\" (UID: \"3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc\") " pod="openshift-multus/network-metrics-daemon-zkvtw"
Sep 30 17:04:02 crc kubenswrapper[4821]: E0930 17:04:02.468868 4821 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Sep 30 17:04:02 crc kubenswrapper[4821]: E0930 17:04:02.468964 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc-metrics-certs podName:3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc nodeName:}" failed. No retries permitted until 2025-09-30 17:04:04.468941405 +0000 UTC m=+40.373987349 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc-metrics-certs") pod "network-metrics-daemon-zkvtw" (UID: "3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc") : object "openshift-multus"/"metrics-daemon-secret" not registered
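The two nestedpendingoperations records above show the kubelet's per-volume retry backoff at work: the first MountVolume.SetUp failure schedules the next attempt 2s out (durationBeforeRetry 2s), and the retry at 17:04:04 doubles that to 4s. A minimal sketch of the doubling policy, in Go; the type name, the cap value, and the demo loop are illustrative assumptions, not the kubelet's actual implementation:

```go
package main

import (
	"fmt"
	"time"
)

// expBackoff mirrors the doubling visible in the log's durationBeforeRetry
// values (2s, then 4s): each failure doubles the wait, up to a cap.
// The 2-minute cap here is an assumption for illustration.
type expBackoff struct {
	delay    time.Duration
	maxDelay time.Duration
}

func (b *expBackoff) next() time.Duration {
	d := b.delay
	b.delay *= 2
	if b.delay > b.maxDelay {
		b.delay = b.maxDelay
	}
	return d
}

func main() {
	b := &expBackoff{delay: 2 * time.Second, maxDelay: 2 * time.Minute}
	for attempt := 1; attempt <= 4; attempt++ {
		fmt.Printf("attempt %d: no retries permitted for %s\n", attempt, b.next())
	}
}
```

Because the doubling is capped, a node that stays NotReady settles into a steady retry cadence instead of backing off without bound.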
[node-status block repeats at 17:04:02.511744 and 17:04:02.614237; omitted]
Sep 30 17:04:02 crc kubenswrapper[4821]: I0930 17:04:02.706481 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 30 17:04:02 crc kubenswrapper[4821]: I0930 17:04:02.706525 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw"
Sep 30 17:04:02 crc kubenswrapper[4821]: E0930 17:04:02.706618 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 30 17:04:02 crc kubenswrapper[4821]: E0930 17:04:02.706716 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc"
[node-status block repeats at 17:04:02.716504, 17:04:02.818784, 17:04:02.921863, 17:04:03.025167, 17:04:03.127357, 17:04:03.230506, 17:04:03.333986, 17:04:03.437517, 17:04:03.539784, 17:04:03.642172; omitted]
Sep 30 17:04:03 crc kubenswrapper[4821]: I0930 17:04:03.705962 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 30 17:04:03 crc kubenswrapper[4821]: E0930 17:04:03.706109 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 30 17:04:03 crc kubenswrapper[4821]: I0930 17:04:03.706232 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
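Every NetworkPluginNotReady and "Error syncing pod" record in this stretch reduces to one filesystem fact: /etc/kubernetes/cni/net.d/ contains no CNI network configuration yet, so the runtime reports NetworkReady=false and the kubelet refuses to create sandboxes. A small Go sketch of that directory check; hasCNIConfig is a hypothetical helper, and the extension list is the conventional CNI config set rather than anything stated in this log:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir holds at least one CNI config file.
// This illustrates the condition the kubelet is waiting on; it is not
// the kubelet's or the CNI library's actual code.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// Directory path taken from the log messages above.
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	fmt.Println("CNI configuration present:", ok)
}
```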
Sep 30 17:04:03 crc kubenswrapper[4821]: E0930 17:04:03.706456 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[node-status block repeats at 17:04:03.745370, 17:04:03.848317, 17:04:03.951258, 17:04:04.053805, 17:04:04.156721, 17:04:04.260149, 17:04:04.364144, 17:04:04.467009; omitted]
Sep 30 17:04:04 crc kubenswrapper[4821]: I0930 17:04:04.488390 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc-metrics-certs\") pod \"network-metrics-daemon-zkvtw\" (UID: \"3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc\") " pod="openshift-multus/network-metrics-daemon-zkvtw"
Sep 30 17:04:04 crc kubenswrapper[4821]: E0930 17:04:04.488622 4821 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Sep 30 17:04:04 crc kubenswrapper[4821]: E0930 17:04:04.488767 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc-metrics-certs podName:3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc nodeName:}" failed. No retries permitted until 2025-09-30 17:04:08.488748566 +0000 UTC m=+44.393794510 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc-metrics-certs") pod "network-metrics-daemon-zkvtw" (UID: "3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc") : object "openshift-multus"/"metrics-daemon-secret" not registered
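The secret.go failures are a cache miss, not proof the secret is absent: "object ... not registered" typically means the kubelet's watch-based secret manager has not yet registered openshift-multus/metrics-daemon-secret for this pod, which is expected while the node is still settling. To confirm the secret exists on the API server directly, a minimal client-go sketch, assuming in-cluster credentials; the namespace and name come from the log, everything else is standard boilerplate:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Assumes the program runs in-cluster; use clientcmd to load a
	// kubeconfig when running from a workstation instead.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Namespace and secret name taken from the log records above.
	s, err := cs.CoreV1().Secrets("openshift-multus").
		Get(context.TODO(), "metrics-daemon-secret", metav1.GetOptions{})
	if err != nil {
		fmt.Println("secret not readable:", err)
		return
	}
	fmt.Printf("secret exists with %d data keys\n", len(s.Data))
}
```

A successful read here, combined with the kubelet errors above, points at kubelet-side registration lag rather than a missing object.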
[node-status block repeats at 17:04:04.570136 and 17:04:04.672854; omitted]
Sep 30 17:04:04 crc kubenswrapper[4821]: I0930 17:04:04.706638 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw"
Sep 30 17:04:04 crc kubenswrapper[4821]: E0930 17:04:04.706848 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc"
Sep 30 17:04:04 crc kubenswrapper[4821]: I0930 17:04:04.707154 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 30 17:04:04 crc kubenswrapper[4821]: E0930 17:04:04.707472 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 30 17:04:04 crc kubenswrapper[4821]: I0930 17:04:04.725057 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:04Z is after 2025-08-24T17:21:41Z"
Sep 30 17:04:04 crc kubenswrapper[4821]: I0930 17:04:04.741307 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:04Z is after 2025-08-24T17:21:41Z"
Sep 30 17:04:04 crc kubenswrapper[4821]: I0930 17:04:04.755424 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:04Z is after 2025-08-24T17:21:41Z"
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:04Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:04 crc kubenswrapper[4821]: I0930 17:04:04.774181 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:04Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:04 crc kubenswrapper[4821]: I0930 17:04:04.777055 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:04 crc kubenswrapper[4821]: I0930 17:04:04.777266 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:04 crc kubenswrapper[4821]: I0930 17:04:04.777394 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:04 crc kubenswrapper[4821]: I0930 17:04:04.777510 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:04 crc kubenswrapper[4821]: I0930 17:04:04.777606 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:04Z","lastTransitionTime":"2025-09-30T17:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:04 crc kubenswrapper[4821]: I0930 17:04:04.795744 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zkvtw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djdqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djdqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:04:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zkvtw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:04Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:04 crc kubenswrapper[4821]: I0930 17:04:04.813351 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c8193f74cb6e485a4d20caa7f9ba2691f8d1ac83503ae9af306c7efa4d33c12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:04Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:04 crc kubenswrapper[4821]: I0930 17:04:04.826610 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55rq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13b69688-7cac-4423-b4e1-553755af1baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca1fffcfa3c00155f7cd2e174872c2ea5baf5a8e4c111f482ac2f43736645df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgvmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55rq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:04Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:04 crc kubenswrapper[4821]: I0930 17:04:04.844055 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:04Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:04 crc kubenswrapper[4821]: I0930 17:04:04.856654 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:04Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:04 crc kubenswrapper[4821]: I0930 17:04:04.868686 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:04Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:04 crc kubenswrapper[4821]: I0930 17:04:04.881250 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:04 crc kubenswrapper[4821]: I0930 17:04:04.881289 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:04Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:04 crc kubenswrapper[4821]: I0930 17:04:04.881341 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:04 crc kubenswrapper[4821]: I0930 17:04:04.881497 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:04 crc kubenswrapper[4821]: I0930 17:04:04.881513 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:04 crc kubenswrapper[4821]: I0930 17:04:04.881524 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:04Z","lastTransitionTime":"2025-09-30T17:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:04 crc kubenswrapper[4821]: I0930 17:04:04.893886 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:04Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:04 crc kubenswrapper[4821]: I0930 17:04:04.911724 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276f790bc8c4bfc0944f34147b88bebda3783439
26612fd54a1b94faafb773bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://276f790bc8c4bfc0944f34147b88bebda378343926612fd54a1b94faafb773bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:03:57Z\\\",\\\"message\\\":\\\"rent time 2025-09-30T17:03:57Z is after 2025-08-24T17:21:41Z]\\\\nI0930 17:03:57.839580 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0930 17:03:57.839599 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0930 17:03:57.839614 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-jpnpn\\\\nI0930 17:03:57.836193 6156 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0930 17:03:57.839621 6156 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-jpnpn in node crc\\\\nI0930 17:03:57.839630 6156 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-jpnpn after 0 failed attempt(s)\\\\nI0930 17:03:57.839635 6156 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-jpnpn\\\\nI0930 17:03:57.839634 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0930 17:03:57.839605 6156 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0930 17:03:57.839646 6156 ovn.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k7m5w_openshift-ovn-kubernetes(6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:04Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:04 crc kubenswrapper[4821]: I0930 17:04:04.922518 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ed48c9a-6f81-43be-9b63-906ab51dc67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12383b2a87a569574eb67b31e8fed56fcf4ed8ee981714de224d0336ce782c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2234f5a0ce398fbf4075fa759a154ccf0ac4175f55c41e0c21eac73ea23efa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9dvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:04Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:04 crc kubenswrapper[4821]: I0930 17:04:04.935152 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc042ce02785326f5b2c0316774ffba46609cf5983d731f84580109ec437611b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:04Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:04 crc kubenswrapper[4821]: I0930 17:04:04.953760 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:04Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:04 crc kubenswrapper[4821]: I0930 17:04:04.984467 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:04 crc kubenswrapper[4821]: I0930 17:04:04.984505 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:04 crc kubenswrapper[4821]: I0930 17:04:04.984514 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:04 crc kubenswrapper[4821]: I0930 17:04:04.984528 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:04 crc kubenswrapper[4821]: I0930 17:04:04.984538 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:04Z","lastTransitionTime":"2025-09-30T17:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.087248 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.087291 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.087302 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.087317 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.087326 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:05Z","lastTransitionTime":"2025-09-30T17:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.190768 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.190838 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.190857 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.190879 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.190895 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:05Z","lastTransitionTime":"2025-09-30T17:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.293821 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.293878 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.293889 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.293907 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.293919 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:05Z","lastTransitionTime":"2025-09-30T17:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.396207 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.396240 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.396249 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.396264 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.396274 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:05Z","lastTransitionTime":"2025-09-30T17:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.498652 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.498692 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.498701 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.498716 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.498727 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:05Z","lastTransitionTime":"2025-09-30T17:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.600965 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.601006 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.601014 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.601032 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.601042 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:05Z","lastTransitionTime":"2025-09-30T17:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.703843 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.703877 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.703888 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.703904 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.703914 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:05Z","lastTransitionTime":"2025-09-30T17:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.706538 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.706705 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:04:05 crc kubenswrapper[4821]: E0930 17:04:05.706829 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:04:05 crc kubenswrapper[4821]: E0930 17:04:05.706986 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.806332 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.806606 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.806684 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.806755 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.806817 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:05Z","lastTransitionTime":"2025-09-30T17:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.908981 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.909042 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.909056 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.909291 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:05 crc kubenswrapper[4821]: I0930 17:04:05.909313 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:05Z","lastTransitionTime":"2025-09-30T17:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.011526 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.011562 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.011571 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.011583 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.011592 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:06Z","lastTransitionTime":"2025-09-30T17:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.113568 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.113924 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.114014 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.114142 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.114216 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:06Z","lastTransitionTime":"2025-09-30T17:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.216614 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.217293 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.217330 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.217361 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.217383 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:06Z","lastTransitionTime":"2025-09-30T17:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.320362 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.320455 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.320465 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.320480 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.320493 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:06Z","lastTransitionTime":"2025-09-30T17:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.422657 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.422938 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.423006 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.423115 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.423291 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:06Z","lastTransitionTime":"2025-09-30T17:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.526148 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.526196 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.526205 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.526221 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.526230 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:06Z","lastTransitionTime":"2025-09-30T17:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.628987 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.629287 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.629349 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.629411 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.629466 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:06Z","lastTransitionTime":"2025-09-30T17:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.706601 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.706703 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:04:06 crc kubenswrapper[4821]: E0930 17:04:06.706958 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:04:06 crc kubenswrapper[4821]: E0930 17:04:06.707105 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.731531 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.731562 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.731574 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.731591 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.731603 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:06Z","lastTransitionTime":"2025-09-30T17:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.833542 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.833585 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.833593 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.833607 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.833616 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:06Z","lastTransitionTime":"2025-09-30T17:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.936446 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.936773 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.936843 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.936907 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:06 crc kubenswrapper[4821]: I0930 17:04:06.936979 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:06Z","lastTransitionTime":"2025-09-30T17:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.039794 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.039826 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.039836 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.039851 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.039860 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:07Z","lastTransitionTime":"2025-09-30T17:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.142794 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.143057 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.143221 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.143306 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.143395 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:07Z","lastTransitionTime":"2025-09-30T17:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.245938 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.245980 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.245990 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.246005 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.246017 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:07Z","lastTransitionTime":"2025-09-30T17:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.347898 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.347938 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.347947 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.347960 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.347969 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:07Z","lastTransitionTime":"2025-09-30T17:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.450683 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.450719 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.450733 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.450747 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.450757 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:07Z","lastTransitionTime":"2025-09-30T17:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.553248 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.553288 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.553299 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.553313 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.553325 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:07Z","lastTransitionTime":"2025-09-30T17:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.655744 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.655784 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.655796 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.655811 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.655820 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:07Z","lastTransitionTime":"2025-09-30T17:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.706419 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.706470 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:04:07 crc kubenswrapper[4821]: E0930 17:04:07.706543 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:04:07 crc kubenswrapper[4821]: E0930 17:04:07.706605 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.758606 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.758652 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.758663 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.758680 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.758694 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:07Z","lastTransitionTime":"2025-09-30T17:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.860860 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.860887 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.860895 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.860946 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.860955 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:07Z","lastTransitionTime":"2025-09-30T17:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.962959 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.963005 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.963017 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.963036 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:07 crc kubenswrapper[4821]: I0930 17:04:07.963048 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:07Z","lastTransitionTime":"2025-09-30T17:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.065769 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.065819 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.065834 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.065854 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.065872 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:08Z","lastTransitionTime":"2025-09-30T17:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.168694 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.168740 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.168753 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.168771 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.168786 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:08Z","lastTransitionTime":"2025-09-30T17:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.271596 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.271645 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.271660 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.271679 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.271692 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:08Z","lastTransitionTime":"2025-09-30T17:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.373809 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.373849 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.373860 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.373877 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.373892 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:08Z","lastTransitionTime":"2025-09-30T17:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.476198 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.476240 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.476253 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.476272 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.476287 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:08Z","lastTransitionTime":"2025-09-30T17:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.530249 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc-metrics-certs\") pod \"network-metrics-daemon-zkvtw\" (UID: \"3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc\") " pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:04:08 crc kubenswrapper[4821]: E0930 17:04:08.530379 4821 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 17:04:08 crc kubenswrapper[4821]: E0930 17:04:08.530436 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc-metrics-certs podName:3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc nodeName:}" failed. No retries permitted until 2025-09-30 17:04:16.530418144 +0000 UTC m=+52.435464088 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc-metrics-certs") pod "network-metrics-daemon-zkvtw" (UID: "3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc") : object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.579252 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.579321 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.579333 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.579353 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.579368 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:08Z","lastTransitionTime":"2025-09-30T17:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.682148 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.682193 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.682205 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.682222 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.682234 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:08Z","lastTransitionTime":"2025-09-30T17:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.706432 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.706470 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:04:08 crc kubenswrapper[4821]: E0930 17:04:08.707230 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:04:08 crc kubenswrapper[4821]: E0930 17:04:08.707533 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.785353 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.785412 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.785425 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.785444 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.785457 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:08Z","lastTransitionTime":"2025-09-30T17:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.889825 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.889862 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.889871 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.889885 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.889896 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:08Z","lastTransitionTime":"2025-09-30T17:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.992250 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.992487 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.992547 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.992615 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:08 crc kubenswrapper[4821]: I0930 17:04:08.992705 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:08Z","lastTransitionTime":"2025-09-30T17:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.095149 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.095203 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.095216 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.095336 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.095366 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:09Z","lastTransitionTime":"2025-09-30T17:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.198392 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.198675 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.198761 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.198843 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.198919 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:09Z","lastTransitionTime":"2025-09-30T17:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.301741 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.301792 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.301807 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.301845 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.301860 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:09Z","lastTransitionTime":"2025-09-30T17:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.405138 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.405182 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.405195 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.405213 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.405226 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:09Z","lastTransitionTime":"2025-09-30T17:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.508348 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.508388 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.508401 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.508419 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.508432 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:09Z","lastTransitionTime":"2025-09-30T17:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.611713 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.611755 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.611768 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.611782 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.611795 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:09Z","lastTransitionTime":"2025-09-30T17:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.706359 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.706535 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:04:09 crc kubenswrapper[4821]: E0930 17:04:09.706617 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:04:09 crc kubenswrapper[4821]: E0930 17:04:09.706822 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.714509 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.714549 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.714563 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.714578 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.714591 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:09Z","lastTransitionTime":"2025-09-30T17:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.818449 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.818488 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.818498 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.818512 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.818522 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:09Z","lastTransitionTime":"2025-09-30T17:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.921003 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.921043 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.921055 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.921072 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:09 crc kubenswrapper[4821]: I0930 17:04:09.921102 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:09Z","lastTransitionTime":"2025-09-30T17:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.023339 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.023397 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.023413 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.023435 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.023447 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:10Z","lastTransitionTime":"2025-09-30T17:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.125815 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.125864 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.125875 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.125895 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.125906 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:10Z","lastTransitionTime":"2025-09-30T17:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.229105 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.229159 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.229171 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.229191 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.229206 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:10Z","lastTransitionTime":"2025-09-30T17:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.331581 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.331661 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.331685 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.331714 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.331732 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:10Z","lastTransitionTime":"2025-09-30T17:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.433995 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.434055 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.434069 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.434112 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.434128 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:10Z","lastTransitionTime":"2025-09-30T17:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.536139 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.536184 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.536197 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.536214 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.536227 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:10Z","lastTransitionTime":"2025-09-30T17:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.638166 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.638199 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.638208 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.638221 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.638231 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:10Z","lastTransitionTime":"2025-09-30T17:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.706474 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.706511 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:04:10 crc kubenswrapper[4821]: E0930 17:04:10.706619 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:04:10 crc kubenswrapper[4821]: E0930 17:04:10.706856 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.741499 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.741555 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.741565 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.741583 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.741593 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:10Z","lastTransitionTime":"2025-09-30T17:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.844425 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.844469 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.844480 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.844496 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.844515 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:10Z","lastTransitionTime":"2025-09-30T17:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.946427 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.946476 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.946492 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.946508 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:10 crc kubenswrapper[4821]: I0930 17:04:10.946520 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:10Z","lastTransitionTime":"2025-09-30T17:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.049186 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.049239 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.049251 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.049269 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.049288 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:11Z","lastTransitionTime":"2025-09-30T17:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.151427 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.151459 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.151468 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.151483 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.151492 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:11Z","lastTransitionTime":"2025-09-30T17:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.254060 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.254148 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.254184 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.254201 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.254213 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:11Z","lastTransitionTime":"2025-09-30T17:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.357345 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.357395 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.357405 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.357419 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.357429 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:11Z","lastTransitionTime":"2025-09-30T17:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.414444 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.414500 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.414511 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.414548 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.414560 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:11Z","lastTransitionTime":"2025-09-30T17:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:11 crc kubenswrapper[4821]: E0930 17:04:11.428439 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2052ba73-7f50-4844-a6ef-43008c5ca24e\\\",\\\"systemUUID\\\":\\\"3c12aacb-94c6-4a5c-b29c-6c2e5c30c341\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.432160 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.432191 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.432202 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.432220 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.432233 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:11Z","lastTransitionTime":"2025-09-30T17:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:11 crc kubenswrapper[4821]: E0930 17:04:11.445288 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2052ba73-7f50-4844-a6ef-43008c5ca24e\\\",\\\"systemUUID\\\":\\\"3c12aacb-94c6-4a5c-b29c-6c2e5c30c341\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.448821 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.448869 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.448879 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.448898 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.448911 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:11Z","lastTransitionTime":"2025-09-30T17:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:11 crc kubenswrapper[4821]: E0930 17:04:11.460712 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2052ba73-7f50-4844-a6ef-43008c5ca24e\\\",\\\"systemUUID\\\":\\\"3c12aacb-94c6-4a5c-b29c-6c2e5c30c341\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.463988 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.464034 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.464047 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.464065 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.464092 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:11Z","lastTransitionTime":"2025-09-30T17:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:11 crc kubenswrapper[4821]: E0930 17:04:11.480516 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2052ba73-7f50-4844-a6ef-43008c5ca24e\\\",\\\"systemUUID\\\":\\\"3c12aacb-94c6-4a5c-b29c-6c2e5c30c341\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.484757 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.484778 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.484786 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.484799 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.484807 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:11Z","lastTransitionTime":"2025-09-30T17:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:11 crc kubenswrapper[4821]: E0930 17:04:11.495979 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2052ba73-7f50-4844-a6ef-43008c5ca24e\\\",\\\"systemUUID\\\":\\\"3c12aacb-94c6-4a5c-b29c-6c2e5c30c341\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:11 crc kubenswrapper[4821]: E0930 17:04:11.496115 4821 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.497652 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.497683 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.497693 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.497709 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.497719 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:11Z","lastTransitionTime":"2025-09-30T17:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.600422 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.600492 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.600515 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.600543 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.600565 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:11Z","lastTransitionTime":"2025-09-30T17:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.703707 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.703749 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.703761 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.703781 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.703792 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:11Z","lastTransitionTime":"2025-09-30T17:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.705963 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.706152 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:04:11 crc kubenswrapper[4821]: E0930 17:04:11.706222 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:04:11 crc kubenswrapper[4821]: E0930 17:04:11.706373 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.807171 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.807268 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.807290 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.807314 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.807332 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:11Z","lastTransitionTime":"2025-09-30T17:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.909996 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.910059 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.910097 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.910117 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:11 crc kubenswrapper[4821]: I0930 17:04:11.910131 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:11Z","lastTransitionTime":"2025-09-30T17:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.012306 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.012348 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.012359 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.012376 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.012388 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:12Z","lastTransitionTime":"2025-09-30T17:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.114202 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.114250 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.114262 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.114277 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.114289 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:12Z","lastTransitionTime":"2025-09-30T17:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.216750 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.216802 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.216813 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.216827 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.216836 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:12Z","lastTransitionTime":"2025-09-30T17:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.318904 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.318961 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.318972 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.318986 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.318996 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:12Z","lastTransitionTime":"2025-09-30T17:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.421777 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.421830 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.421847 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.421870 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.421889 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:12Z","lastTransitionTime":"2025-09-30T17:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.524669 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.524707 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.524716 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.524729 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.524738 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:12Z","lastTransitionTime":"2025-09-30T17:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.626847 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.626885 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.626896 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.626911 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.626921 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:12Z","lastTransitionTime":"2025-09-30T17:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.706815 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:04:12 crc kubenswrapper[4821]: E0930 17:04:12.706948 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.706828 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:04:12 crc kubenswrapper[4821]: E0930 17:04:12.707714 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.730042 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.730075 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.730106 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.730124 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.730137 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:12Z","lastTransitionTime":"2025-09-30T17:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.832161 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.832206 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.832217 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.832233 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.832244 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:12Z","lastTransitionTime":"2025-09-30T17:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.934749 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.934820 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.934877 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.934898 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:12 crc kubenswrapper[4821]: I0930 17:04:12.934910 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:12Z","lastTransitionTime":"2025-09-30T17:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.037504 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.037564 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.037583 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.037609 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.037630 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:13Z","lastTransitionTime":"2025-09-30T17:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.139822 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.139855 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.139864 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.139877 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.139886 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:13Z","lastTransitionTime":"2025-09-30T17:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.243921 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.243959 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.243968 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.243982 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.243993 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:13Z","lastTransitionTime":"2025-09-30T17:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.346308 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.346361 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.346377 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.346400 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.346418 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:13Z","lastTransitionTime":"2025-09-30T17:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.449327 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.449859 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.449925 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.449994 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.450050 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:13Z","lastTransitionTime":"2025-09-30T17:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.553270 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.553314 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.553324 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.553340 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.553353 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:13Z","lastTransitionTime":"2025-09-30T17:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.656038 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.656328 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.656420 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.656542 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.656623 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:13Z","lastTransitionTime":"2025-09-30T17:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.706789 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.706841 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:04:13 crc kubenswrapper[4821]: E0930 17:04:13.707396 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:04:13 crc kubenswrapper[4821]: E0930 17:04:13.707483 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.707766 4821 scope.go:117] "RemoveContainer" containerID="276f790bc8c4bfc0944f34147b88bebda378343926612fd54a1b94faafb773bb" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.760657 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.760700 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.760748 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.760811 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.760823 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:13Z","lastTransitionTime":"2025-09-30T17:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.863295 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.863330 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.863339 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.863352 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.863362 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:13Z","lastTransitionTime":"2025-09-30T17:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.966202 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.966247 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.966256 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.966272 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:13 crc kubenswrapper[4821]: I0930 17:04:13.966283 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:13Z","lastTransitionTime":"2025-09-30T17:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.064548 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7m5w_6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca/ovnkube-controller/1.log" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.067358 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" event={"ID":"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca","Type":"ContainerStarted","Data":"54a85abaf14dff92f233b7496593399569718a56285e6ad12c12bff2cd61edf0"} Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.067679 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.068442 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.068475 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.068489 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.068503 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.068514 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:14Z","lastTransitionTime":"2025-09-30T17:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.081659 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.097981 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c8193f74cb6e485a4d20caa7f9ba2691f8d1ac83503ae9af306c7efa4d33c12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.110527 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55rq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13b69688-7cac-4423-b4e1-553755af1baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca1fffcfa3c00155f7cd2e174872c2ea5baf5a8e4c111f482ac2f43736645df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgvmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55rq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.129054 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.145379 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.156530 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.167399 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.170294 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.170325 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.170336 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.170352 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.170363 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:14Z","lastTransitionTime":"2025-09-30T17:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.177489 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.193768 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54a85abaf14dff92f233b7496593399569718a56285e6ad12c12bff2cd61edf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://276f790bc8c4bfc0944f34147b88bebda378343926612fd54a1b94faafb773bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:03:57Z\\\",\\\"message\\\":\\\"rent time 2025-09-30T17:03:57Z is after 2025-08-24T17:21:41Z]\\\\nI0930 17:03:57.839580 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0930 17:03:57.839599 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0930 17:03:57.839614 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-jpnpn\\\\nI0930 17:03:57.836193 6156 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0930 17:03:57.839621 6156 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-jpnpn in node crc\\\\nI0930 17:03:57.839630 6156 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-jpnpn after 0 failed attempt(s)\\\\nI0930 17:03:57.839635 6156 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-jpnpn\\\\nI0930 17:03:57.839634 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0930 17:03:57.839605 6156 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0930 17:03:57.839646 6156 
ovn.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.206605 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ed48c9a-6f81-43be-9b63-906ab51dc67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12383b2a87a569574eb67b31e8fed56fcf4ed8ee981714de224d0336ce782c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2234f5a0ce398fbf4075fa759a154ccf0ac4175f55c41e0c21eac73ea23efa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9dvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:14Z is after 2025-08-24T17:21:41Z" Sep 30 
17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.219466 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc042ce02785326f5b2c0316774ffba46609cf5983d731f84580109ec437611b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.230787 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.247935 4821 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.261128 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.273437 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.273477 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.273486 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.273501 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.273510 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:14Z","lastTransitionTime":"2025-09-30T17:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.274733 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.283998 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zkvtw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djdqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djdqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:04:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zkvtw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.376131 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.376164 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.376176 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.376214 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.376222 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:14Z","lastTransitionTime":"2025-09-30T17:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.478567 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.479700 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.479805 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.479905 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.479983 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:14Z","lastTransitionTime":"2025-09-30T17:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.581955 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.582015 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.582025 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.582040 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.582052 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:14Z","lastTransitionTime":"2025-09-30T17:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.684563 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.684603 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.684617 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.684633 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.684646 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:14Z","lastTransitionTime":"2025-09-30T17:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.707042 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.707059 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:04:14 crc kubenswrapper[4821]: E0930 17:04:14.707176 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:04:14 crc kubenswrapper[4821]: E0930 17:04:14.707271 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.719285 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55rq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13b69688-7cac-4423-b4e1-553755af1baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca1fffcfa3c00155f7cd2e174872c2ea5baf5a8e4c111f482ac2f43736645df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgvmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55rq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.731150 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.743923 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c8193f74cb6e485a4d20caa7f9ba2691f8d1ac83503ae9af306c7efa4d33c12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd5
0ca745af0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.755588 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.766753 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.779938 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.785899 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.785933 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.785941 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.785957 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.785972 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:14Z","lastTransitionTime":"2025-09-30T17:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.801813 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54a85abaf14dff92f233b7496593399569718a56285e6ad12c12bff2cd61edf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://276f790bc8c4bfc0944f34147b88bebda378343926612fd54a1b94faafb773bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:03:57Z\\\",\\\"message\\\":\\\"rent time 2025-09-30T17:03:57Z is after 2025-08-24T17:21:41Z]\\\\nI0930 17:03:57.839580 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0930 17:03:57.839599 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0930 17:03:57.839614 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-jpnpn\\\\nI0930 17:03:57.836193 6156 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0930 17:03:57.839621 6156 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-jpnpn in node crc\\\\nI0930 17:03:57.839630 6156 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-jpnpn after 0 failed attempt(s)\\\\nI0930 17:03:57.839635 6156 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-jpnpn\\\\nI0930 17:03:57.839634 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0930 17:03:57.839605 6156 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0930 17:03:57.839646 6156 
ovn.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.814238 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ed48c9a-6f81-43be-9b63-906ab51dc67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12383b2a87a569574eb67b31e8fed56fcf4ed8ee981714de224d0336ce782c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2234f5a0ce398fbf4075fa759a154ccf0ac4175f55c41e0c21eac73ea23efa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9dvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:14Z is after 2025-08-24T17:21:41Z" Sep 30 
17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.828817 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc042ce02785326f5b2c0316774ffba46609cf5983d731f84580109ec437611b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.841661 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.854280 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.866177 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.877399 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.888840 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.888877 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.888888 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.888903 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.888913 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:14Z","lastTransitionTime":"2025-09-30T17:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.892541 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\"
,\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.904497 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zkvtw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djdqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djdqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:04:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zkvtw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.916896 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.991764 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.991814 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.991827 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.991848 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:14 crc kubenswrapper[4821]: I0930 17:04:14.991865 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:14Z","lastTransitionTime":"2025-09-30T17:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.072695 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7m5w_6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca/ovnkube-controller/2.log" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.073382 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7m5w_6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca/ovnkube-controller/1.log" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.076594 4821 generic.go:334] "Generic (PLEG): container finished" podID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerID="54a85abaf14dff92f233b7496593399569718a56285e6ad12c12bff2cd61edf0" exitCode=1 Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.076645 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" event={"ID":"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca","Type":"ContainerDied","Data":"54a85abaf14dff92f233b7496593399569718a56285e6ad12c12bff2cd61edf0"} Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.076687 4821 scope.go:117] "RemoveContainer" containerID="276f790bc8c4bfc0944f34147b88bebda378343926612fd54a1b94faafb773bb" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.078213 4821 scope.go:117] "RemoveContainer" containerID="54a85abaf14dff92f233b7496593399569718a56285e6ad12c12bff2cd61edf0" Sep 30 17:04:15 crc kubenswrapper[4821]: E0930 17:04:15.078589 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-k7m5w_openshift-ovn-kubernetes(6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.093828 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.095063 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.095122 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.095136 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.095157 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.095172 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:15Z","lastTransitionTime":"2025-09-30T17:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.111778 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c8193f74cb6e485a4d20caa7f9ba2691f8d1ac83503ae9af306c7efa4d33c12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.124342 4821 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-55rq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13b69688-7cac-4423-b4e1-553755af1baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca1fffcfa3c00155f7cd2e174872c2ea5baf5a8e4c111f482ac2f43736645df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgvmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55rq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.137454 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.150550 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.175134 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54a85abaf14dff92f233b7496593399569718a56
285e6ad12c12bff2cd61edf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://276f790bc8c4bfc0944f34147b88bebda378343926612fd54a1b94faafb773bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:03:57Z\\\",\\\"message\\\":\\\"rent time 2025-09-30T17:03:57Z is after 2025-08-24T17:21:41Z]\\\\nI0930 17:03:57.839580 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0930 17:03:57.839599 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0930 17:03:57.839614 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-jpnpn\\\\nI0930 17:03:57.836193 6156 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0930 17:03:57.839621 6156 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-jpnpn in node crc\\\\nI0930 17:03:57.839630 6156 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-jpnpn after 0 failed attempt(s)\\\\nI0930 17:03:57.839635 6156 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-jpnpn\\\\nI0930 17:03:57.839634 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0930 17:03:57.839605 6156 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0930 17:03:57.839646 6156 ovn.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54a85abaf14dff92f233b7496593399569718a56285e6ad12c12bff2cd61edf0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:04:14Z\\\",\\\"message\\\":\\\"ult_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4c1be812-05d3-4f45-91b5-a853a5c8de71}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0930 17:04:14.507124 6391 ovnkube.go:137] failed to run ovnkube: [failed to 
start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} w\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:04:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.186699 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ed48c9a-6f81-43be-9b63-906ab51dc67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12383b2a87a569574eb67b31e8fed56fcf4ed8ee981714de224d0336ce782c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2234f5a0ce398fbf4075fa759a154ccf0ac4175f55c41e0c21eac73ea23efa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9dvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:15Z is after 2025-08-24T17:21:41Z" Sep 30 
17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.198119 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.198191 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.198202 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.198250 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.198264 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:15Z","lastTransitionTime":"2025-09-30T17:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.203586 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc042ce02785326f5b2c0316774ffba46609cf5983d731f84580109ec437611b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.219519 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.235236 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.248563 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.260904 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-3
0T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.274909 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},
{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.287215 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zkvtw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djdqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djdqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:04:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zkvtw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.299934 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.300992 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.301038 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.301054 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.301075 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.301116 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:15Z","lastTransitionTime":"2025-09-30T17:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.311471 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.404031 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.404101 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.404149 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.404171 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.404187 4821 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:15Z","lastTransitionTime":"2025-09-30T17:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.506493 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.506543 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.506552 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.506565 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.506574 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:15Z","lastTransitionTime":"2025-09-30T17:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.608613 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.608644 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.608653 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.608666 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.608674 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:15Z","lastTransitionTime":"2025-09-30T17:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.706294 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.706294 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:04:15 crc kubenswrapper[4821]: E0930 17:04:15.706426 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:04:15 crc kubenswrapper[4821]: E0930 17:04:15.706498 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.707033 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.710443 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.710473 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.710483 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.710497 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.710506 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:15Z","lastTransitionTime":"2025-09-30T17:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.716141 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.722220 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.732463 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.744049 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.753501 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zkvtw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djdqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djdqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:04:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zkvtw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:15 crc 
kubenswrapper[4821]: I0930 17:04:15.766297 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c8193f74cb6e485a4d20caa7f9ba2691f8d1ac83503ae9af306c7efa4d33c12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"
cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.775716 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55rq2" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13b69688-7cac-4423-b4e1-553755af1baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca1fffcfa3c00155f7cd2e174872c2ea5baf5a8e4c111f482ac2f43736645df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgvmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55rq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.786839 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.798123 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.809570 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.812490 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.812529 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.812540 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.812555 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.812567 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:15Z","lastTransitionTime":"2025-09-30T17:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.822045 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.833732 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.851920 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54a85abaf14dff92f233b7496593399569718a56
285e6ad12c12bff2cd61edf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://276f790bc8c4bfc0944f34147b88bebda378343926612fd54a1b94faafb773bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:03:57Z\\\",\\\"message\\\":\\\"rent time 2025-09-30T17:03:57Z is after 2025-08-24T17:21:41Z]\\\\nI0930 17:03:57.839580 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0930 17:03:57.839599 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0930 17:03:57.839614 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-jpnpn\\\\nI0930 17:03:57.836193 6156 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0930 17:03:57.839621 6156 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-jpnpn in node crc\\\\nI0930 17:03:57.839630 6156 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-jpnpn after 0 failed attempt(s)\\\\nI0930 17:03:57.839635 6156 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-jpnpn\\\\nI0930 17:03:57.839634 6156 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0930 17:03:57.839605 6156 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0930 17:03:57.839646 6156 ovn.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54a85abaf14dff92f233b7496593399569718a56285e6ad12c12bff2cd61edf0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:04:14Z\\\",\\\"message\\\":\\\"ult_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4c1be812-05d3-4f45-91b5-a853a5c8de71}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0930 17:04:14.507124 6391 ovnkube.go:137] failed to run ovnkube: [failed to 
start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} w\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:04:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.862673 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ed48c9a-6f81-43be-9b63-906ab51dc67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12383b2a87a569574eb67b31e8fed56fcf4ed8ee981714de224d0336ce782c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2234f5a0ce398fbf4075fa759a154ccf0ac4175f55c41e0c21eac73ea23efa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9dvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:15Z is after 2025-08-24T17:21:41Z" Sep 30 
17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.875748 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc042ce02785326f5b2c0316774ffba46609cf5983d731f84580109ec437611b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.887036 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.901331 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.915221 4821 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.915254 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.915264 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.915293 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:15 crc kubenswrapper[4821]: I0930 17:04:15.915303 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:15Z","lastTransitionTime":"2025-09-30T17:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.016921 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.016954 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.016962 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.016977 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.016990 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:16Z","lastTransitionTime":"2025-09-30T17:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.081921 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7m5w_6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca/ovnkube-controller/2.log" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.085348 4821 scope.go:117] "RemoveContainer" containerID="54a85abaf14dff92f233b7496593399569718a56285e6ad12c12bff2cd61edf0" Sep 30 17:04:16 crc kubenswrapper[4821]: E0930 17:04:16.085496 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-k7m5w_openshift-ovn-kubernetes(6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.096160 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.106543 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.115959 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.119030 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.119089 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.119107 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.119127 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.119138 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:16Z","lastTransitionTime":"2025-09-30T17:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.129196 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.139284 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zkvtw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djdqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djdqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:04:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zkvtw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.148993 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3bd3e0-5235-416c-bab2-f2b60bab29cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9d50af49f77546ea6f04474dd222a4b9ee7d29f799ee20fad03c5ddf8e0b43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73c51f7ec47dab237eb69ee8ea74794f461ba42a02a6f2de5787c0a3bc972313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43f233324c870ea4a282464cb1d7a96b75d314a73d505fe47373c4269f1daeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf28842a6b40581032806bee8507201e74dac6ef16e4cba7573389672be0c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf28842a6b40581032806bee8507201e74dac6ef16e4cba7573389672be0c37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.161015 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.177888 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c8193f74cb6e485a4d20caa7f9ba2691f8d1ac83503ae9af306c7efa4d33c12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd5
0ca745af0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.188762 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55rq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13b69688-7cac-4423-b4e1-553755af1baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca1fffcfa3c00155f7cd2e174872c2ea5baf5a8e4c111f482ac2f43736645df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgvmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:50Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-55rq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.202903 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.216297 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.220905 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.220941 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.220950 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.220965 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.220977 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:16Z","lastTransitionTime":"2025-09-30T17:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.230049 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.242738 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.253341 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.270903 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54a85abaf14dff92f233b7496593399569718a56
285e6ad12c12bff2cd61edf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54a85abaf14dff92f233b7496593399569718a56285e6ad12c12bff2cd61edf0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:04:14Z\\\",\\\"message\\\":\\\"ult_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4c1be812-05d3-4f45-91b5-a853a5c8de71}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0930 17:04:14.507124 6391 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} w\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:04:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k7m5w_openshift-ovn-kubernetes(6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.283036 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ed48c9a-6f81-43be-9b63-906ab51dc67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12383b2a87a569574eb67b31e8fed56fcf4ed8ee981714de224d0336ce782c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2234f5a0ce398fbf4075fa759a154ccf0ac4175f55c41e0c21eac73ea23efa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9dvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.294559 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc042ce02785326f5b2c0316774ffba46609cf5983d731f84580109ec437611b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.323465 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.323506 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.323521 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.323538 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.323586 4821 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:16Z","lastTransitionTime":"2025-09-30T17:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.426247 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.426284 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.426293 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.426306 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.426317 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:16Z","lastTransitionTime":"2025-09-30T17:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.512326 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:04:16 crc kubenswrapper[4821]: E0930 17:04:16.512432 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:04:48.512415029 +0000 UTC m=+84.417460973 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.512583 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.512611 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:04:16 crc kubenswrapper[4821]: E0930 17:04:16.512702 4821 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 17:04:16 crc kubenswrapper[4821]: E0930 17:04:16.512741 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 17:04:48.512733076 +0000 UTC m=+84.417779020 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 17:04:16 crc kubenswrapper[4821]: E0930 17:04:16.512771 4821 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 17:04:16 crc kubenswrapper[4821]: E0930 17:04:16.512861 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 17:04:48.512842708 +0000 UTC m=+84.417888652 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.528467 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.528515 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.528529 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.528546 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.528558 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:16Z","lastTransitionTime":"2025-09-30T17:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.613648 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc-metrics-certs\") pod \"network-metrics-daemon-zkvtw\" (UID: \"3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc\") " pod="openshift-multus/network-metrics-daemon-zkvtw"
Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.613728 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.613747 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 30 17:04:16 crc kubenswrapper[4821]: E0930 17:04:16.613858 4821 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Sep 30 17:04:16 crc kubenswrapper[4821]: E0930 17:04:16.613883 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Sep 30 17:04:16 crc kubenswrapper[4821]: E0930 17:04:16.613916 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Sep 30 17:04:16 crc kubenswrapper[4821]: E0930 17:04:16.613928 4821 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Sep 30 17:04:16 crc kubenswrapper[4821]: E0930 17:04:16.613950 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc-metrics-certs podName:3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc nodeName:}" failed. No retries permitted until 2025-09-30 17:04:32.613927481 +0000 UTC m=+68.518973505 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc-metrics-certs") pod "network-metrics-daemon-zkvtw" (UID: "3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc") : object "openshift-multus"/"metrics-daemon-secret" not registered
Sep 30 17:04:16 crc kubenswrapper[4821]: E0930 17:04:16.613975 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-09-30 17:04:48.613962282 +0000 UTC m=+84.519008226 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Sep 30 17:04:16 crc kubenswrapper[4821]: E0930 17:04:16.613875 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Sep 30 17:04:16 crc kubenswrapper[4821]: E0930 17:04:16.613996 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Sep 30 17:04:16 crc kubenswrapper[4821]: E0930 17:04:16.614003 4821 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Sep 30 17:04:16 crc kubenswrapper[4821]: E0930 17:04:16.614034 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-09-30 17:04:48.614028583 +0000 UTC m=+84.519074527 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.631114 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.631162 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.631172 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.631189 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.631199 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:16Z","lastTransitionTime":"2025-09-30T17:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.706673 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.706720 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:04:16 crc kubenswrapper[4821]: E0930 17:04:16.706836 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:04:16 crc kubenswrapper[4821]: E0930 17:04:16.706910 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.733361 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.733435 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.733456 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.733472 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.733513 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:16Z","lastTransitionTime":"2025-09-30T17:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.836507 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.836546 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.836578 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.836597 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.836611 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:16Z","lastTransitionTime":"2025-09-30T17:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.938957 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.939018 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.939039 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.939069 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:16 crc kubenswrapper[4821]: I0930 17:04:16.939099 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:16Z","lastTransitionTime":"2025-09-30T17:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.041605 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.041643 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.041653 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.041667 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.041678 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:17Z","lastTransitionTime":"2025-09-30T17:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.144904 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.144951 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.144966 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.144987 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.145001 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:17Z","lastTransitionTime":"2025-09-30T17:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.250842 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.250912 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.250924 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.250951 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.250963 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:17Z","lastTransitionTime":"2025-09-30T17:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.353634 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.353693 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.353702 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.353716 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.353725 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:17Z","lastTransitionTime":"2025-09-30T17:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.455705 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.455775 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.455789 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.455808 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.455819 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:17Z","lastTransitionTime":"2025-09-30T17:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.558142 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.558180 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.558188 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.558202 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.558211 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:17Z","lastTransitionTime":"2025-09-30T17:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.660313 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.660363 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.660373 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.660388 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.660399 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:17Z","lastTransitionTime":"2025-09-30T17:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.706865 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:04:17 crc kubenswrapper[4821]: E0930 17:04:17.707034 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.707216 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:04:17 crc kubenswrapper[4821]: E0930 17:04:17.707292 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.763122 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.763162 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.763171 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.763186 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.763196 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:17Z","lastTransitionTime":"2025-09-30T17:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.865242 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.865579 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.865655 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.865740 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.865814 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:17Z","lastTransitionTime":"2025-09-30T17:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.967881 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.967923 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.967932 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.967947 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:17 crc kubenswrapper[4821]: I0930 17:04:17.967957 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:17Z","lastTransitionTime":"2025-09-30T17:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.009861 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.023713 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126b
d791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:18Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.037948 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c8193f74cb6e485a4d20caa7f9ba2691f8d1ac83503ae9af306c7efa4d33c12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:18Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.047953 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55rq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13b69688-7cac-4423-b4e1-553755af1baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca1fffcfa3c00155f7cd2e174872c2ea5baf5a8e4c111f482ac2f43736645df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgvmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55rq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:18Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.059894 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:18Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.069953 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.070160 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.070229 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.070294 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.070355 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:18Z","lastTransitionTime":"2025-09-30T17:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.072143 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:18Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.089948 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54a85abaf14dff92f233b7496593399569718a56285e6ad12c12bff2cd61edf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54a85abaf14dff92f233b7496593399569718a56285e6ad12c12bff2cd61edf0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:04:14Z\\\",\\\"message\\\":\\\"ult_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4c1be812-05d3-4f45-91b5-a853a5c8de71}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0930 17:04:14.507124 6391 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} w\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:04:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k7m5w_openshift-ovn-kubernetes(6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:18Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.102846 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ed48c9a-6f81-43be-9b63-906ab51dc67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12383b2a87a569574eb67b31e8fed56fcf4ed8ee981714de224d0336ce782c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2234f5a0ce398fbf4075fa759a154ccf0ac4175f55c41e0c21eac73ea23efa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9dvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:18Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.115107 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc042ce02785326f5b2c0316774ffba46609cf5983d731f84580109ec437611b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:18Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.128279 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:18Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.140616 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:18Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.154973 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:18Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.165710 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-3
0T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:18Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.172942 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.172971 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.172982 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.172998 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.173011 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:18Z","lastTransitionTime":"2025-09-30T17:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.177444 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:18Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.187616 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zkvtw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djdqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djdqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:04:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zkvtw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:18Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.198011 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3bd3e0-5235-416c-bab2-f2b60bab29cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9d50af49f77546ea6f04474dd222a4b9ee7d29f799ee20fad03c5ddf8e0b43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73c51f7ec47dab237eb69ee8ea74794f461ba42a02a6f2de5787c0a3bc972313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43f233324c870ea4a282464cb1d7a96b75d314a73d505fe47373c4269f1daeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf28842a6b40581032806bee8507201e74dac6ef16e4cba7573389672be0c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf28842a6b40581032806bee8507201e74dac6ef16e4cba7573389672be0c37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:18Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.208579 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:18Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.217778 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:18Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.275859 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.275906 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.275917 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.275932 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.275943 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:18Z","lastTransitionTime":"2025-09-30T17:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.378676 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.378708 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.378718 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.378731 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.378741 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:18Z","lastTransitionTime":"2025-09-30T17:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.480618 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.480652 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.480662 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.480678 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.480689 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:18Z","lastTransitionTime":"2025-09-30T17:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.584003 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.585152 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.585360 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.585591 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.585796 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:18Z","lastTransitionTime":"2025-09-30T17:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.687973 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.688278 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.688387 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.688462 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.688616 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:18Z","lastTransitionTime":"2025-09-30T17:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.706318 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.706392 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:04:18 crc kubenswrapper[4821]: E0930 17:04:18.706630 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:04:18 crc kubenswrapper[4821]: E0930 17:04:18.706730 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.790905 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.790954 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.790966 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.790983 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.790995 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:18Z","lastTransitionTime":"2025-09-30T17:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.892963 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.892997 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.893007 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.893023 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.893032 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:18Z","lastTransitionTime":"2025-09-30T17:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.995499 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.995539 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.995554 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.995571 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:18 crc kubenswrapper[4821]: I0930 17:04:18.995585 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:18Z","lastTransitionTime":"2025-09-30T17:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.098815 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.098888 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.098901 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.098919 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.098936 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:19Z","lastTransitionTime":"2025-09-30T17:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.201812 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.202303 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.202494 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.202690 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.202846 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:19Z","lastTransitionTime":"2025-09-30T17:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.304915 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.305237 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.305326 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.305395 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.305457 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:19Z","lastTransitionTime":"2025-09-30T17:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.407790 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.407832 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.407843 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.407859 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.407869 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:19Z","lastTransitionTime":"2025-09-30T17:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.511145 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.511205 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.511218 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.511235 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.511246 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:19Z","lastTransitionTime":"2025-09-30T17:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.613604 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.613651 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.613663 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.613679 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.613690 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:19Z","lastTransitionTime":"2025-09-30T17:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.706951 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.706998 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:04:19 crc kubenswrapper[4821]: E0930 17:04:19.707117 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:04:19 crc kubenswrapper[4821]: E0930 17:04:19.707248 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.716008 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.716062 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.716105 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.716133 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.716151 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:19Z","lastTransitionTime":"2025-09-30T17:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.818572 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.818609 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.818617 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.818631 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.818641 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:19Z","lastTransitionTime":"2025-09-30T17:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.920987 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.921022 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.921034 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.921050 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:19 crc kubenswrapper[4821]: I0930 17:04:19.921060 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:19Z","lastTransitionTime":"2025-09-30T17:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.024156 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.024197 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.024207 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.024224 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.024235 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:20Z","lastTransitionTime":"2025-09-30T17:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.126495 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.126529 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.126539 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.126554 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.126585 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:20Z","lastTransitionTime":"2025-09-30T17:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.229011 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.229041 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.229050 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.229064 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.229074 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:20Z","lastTransitionTime":"2025-09-30T17:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.331432 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.331526 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.331547 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.331584 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.331601 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:20Z","lastTransitionTime":"2025-09-30T17:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.433239 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.433292 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.433305 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.433323 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.433336 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:20Z","lastTransitionTime":"2025-09-30T17:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.535480 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.535555 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.535576 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.535602 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.535620 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:20Z","lastTransitionTime":"2025-09-30T17:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.637593 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.637636 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.637647 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.637662 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.637672 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:20Z","lastTransitionTime":"2025-09-30T17:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.706854 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.706974 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:04:20 crc kubenswrapper[4821]: E0930 17:04:20.707145 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:04:20 crc kubenswrapper[4821]: E0930 17:04:20.706999 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.739935 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.739972 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.739983 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.739998 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.740008 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:20Z","lastTransitionTime":"2025-09-30T17:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.843123 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.843179 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.843195 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.843212 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.843225 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:20Z","lastTransitionTime":"2025-09-30T17:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.946777 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.946847 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.946872 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.946901 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:20 crc kubenswrapper[4821]: I0930 17:04:20.946923 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:20Z","lastTransitionTime":"2025-09-30T17:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.048920 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.048968 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.048985 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.049008 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.049024 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:21Z","lastTransitionTime":"2025-09-30T17:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.152950 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.153006 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.153025 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.153049 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.153066 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:21Z","lastTransitionTime":"2025-09-30T17:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.255982 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.256027 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.256046 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.256063 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.256075 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:21Z","lastTransitionTime":"2025-09-30T17:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.358519 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.358567 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.358579 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.358597 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.358606 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:21Z","lastTransitionTime":"2025-09-30T17:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.461341 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.461391 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.461410 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.461430 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.461443 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:21Z","lastTransitionTime":"2025-09-30T17:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.563608 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.563642 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.563651 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.563666 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.563676 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:21Z","lastTransitionTime":"2025-09-30T17:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.666137 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.666248 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.666274 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.666296 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.666313 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:21Z","lastTransitionTime":"2025-09-30T17:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.706422 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:04:21 crc kubenswrapper[4821]: E0930 17:04:21.706579 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.706818 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:04:21 crc kubenswrapper[4821]: E0930 17:04:21.706916 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.769335 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.769383 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.769394 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.769410 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.769421 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:21Z","lastTransitionTime":"2025-09-30T17:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.846584 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.846632 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.846641 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.846656 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.846665 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:21Z","lastTransitionTime":"2025-09-30T17:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:21 crc kubenswrapper[4821]: E0930 17:04:21.865752 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2052ba73-7f50-4844-a6ef-43008c5ca24e\\\",\\\"systemUUID\\\":\\\"3c12aacb-94c6-4a5c-b29c-6c2e5c30c341\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:21Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.871802 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.871840 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.871850 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.871866 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.871877 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:21Z","lastTransitionTime":"2025-09-30T17:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:21 crc kubenswrapper[4821]: E0930 17:04:21.885847 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2052ba73-7f50-4844-a6ef-43008c5ca24e\\\",\\\"systemUUID\\\":\\\"3c12aacb-94c6-4a5c-b29c-6c2e5c30c341\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:21Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.890583 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.890640 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.890659 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.890682 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.890699 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:21Z","lastTransitionTime":"2025-09-30T17:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:21 crc kubenswrapper[4821]: E0930 17:04:21.941656 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2052ba73-7f50-4844-a6ef-43008c5ca24e\\\",\\\"systemUUID\\\":\\\"3c12aacb-94c6-4a5c-b29c-6c2e5c30c341\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:21Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.946994 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.947058 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.947073 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.947127 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.947141 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:21Z","lastTransitionTime":"2025-09-30T17:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:21 crc kubenswrapper[4821]: E0930 17:04:21.961480 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2052ba73-7f50-4844-a6ef-43008c5ca24e\\\",\\\"systemUUID\\\":\\\"3c12aacb-94c6-4a5c-b29c-6c2e5c30c341\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:21Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.966172 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.966219 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.966230 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.966247 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.966259 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:21Z","lastTransitionTime":"2025-09-30T17:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:21 crc kubenswrapper[4821]: E0930 17:04:21.981255 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2052ba73-7f50-4844-a6ef-43008c5ca24e\\\",\\\"systemUUID\\\":\\\"3c12aacb-94c6-4a5c-b29c-6c2e5c30c341\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:21Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:21 crc kubenswrapper[4821]: E0930 17:04:21.981408 4821 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.982935 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.982966 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.982976 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.982992 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:21 crc kubenswrapper[4821]: I0930 17:04:21.983004 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:21Z","lastTransitionTime":"2025-09-30T17:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.086054 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.086146 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.086164 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.086211 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.086227 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:22Z","lastTransitionTime":"2025-09-30T17:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.188451 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.188510 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.188524 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.188543 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.188557 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:22Z","lastTransitionTime":"2025-09-30T17:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.291146 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.291189 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.291200 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.291215 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.291228 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:22Z","lastTransitionTime":"2025-09-30T17:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.393781 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.393819 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.393828 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.393844 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.393853 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:22Z","lastTransitionTime":"2025-09-30T17:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.497233 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.497280 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.497290 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.497306 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.497321 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:22Z","lastTransitionTime":"2025-09-30T17:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.599999 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.600059 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.600071 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.600113 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.600128 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:22Z","lastTransitionTime":"2025-09-30T17:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.703301 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.703341 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.703358 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.703376 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.703390 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:22Z","lastTransitionTime":"2025-09-30T17:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.706991 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:04:22 crc kubenswrapper[4821]: E0930 17:04:22.707218 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.707354 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:04:22 crc kubenswrapper[4821]: E0930 17:04:22.707539 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.805976 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.806012 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.806021 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.806035 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.806045 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:22Z","lastTransitionTime":"2025-09-30T17:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.908839 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.908874 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.908884 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.908896 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:22 crc kubenswrapper[4821]: I0930 17:04:22.908906 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:22Z","lastTransitionTime":"2025-09-30T17:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.011691 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.011735 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.011745 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.011760 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.011770 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:23Z","lastTransitionTime":"2025-09-30T17:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.114430 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.114515 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.114534 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.114567 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.114592 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:23Z","lastTransitionTime":"2025-09-30T17:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.217320 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.217359 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.217374 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.217388 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.217397 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:23Z","lastTransitionTime":"2025-09-30T17:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.320971 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.321285 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.321351 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.321446 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.321545 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:23Z","lastTransitionTime":"2025-09-30T17:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.425112 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.425426 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.425533 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.425617 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.425694 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:23Z","lastTransitionTime":"2025-09-30T17:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.529072 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.529132 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.529143 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.529157 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.529167 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:23Z","lastTransitionTime":"2025-09-30T17:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.632162 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.632207 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.632217 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.632232 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.632243 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:23Z","lastTransitionTime":"2025-09-30T17:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.706749 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:04:23 crc kubenswrapper[4821]: E0930 17:04:23.707725 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.706847 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:04:23 crc kubenswrapper[4821]: E0930 17:04:23.707980 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.738698 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.738789 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.738813 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.738847 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.738875 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:23Z","lastTransitionTime":"2025-09-30T17:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.843127 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.843481 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.843657 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.843758 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.843848 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:23Z","lastTransitionTime":"2025-09-30T17:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.947012 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.947305 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.947418 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.947527 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:23 crc kubenswrapper[4821]: I0930 17:04:23.947628 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:23Z","lastTransitionTime":"2025-09-30T17:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.050380 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.050436 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.050447 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.050544 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.050561 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:24Z","lastTransitionTime":"2025-09-30T17:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.154004 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.154336 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.154400 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.154485 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.154560 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:24Z","lastTransitionTime":"2025-09-30T17:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.257196 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.257244 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.257256 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.257273 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.257284 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:24Z","lastTransitionTime":"2025-09-30T17:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.359914 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.359948 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.359956 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.359974 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.359985 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:24Z","lastTransitionTime":"2025-09-30T17:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.477839 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.478119 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.478223 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.478308 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.478401 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:24Z","lastTransitionTime":"2025-09-30T17:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.580851 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.580892 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.580906 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.580923 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.580935 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:24Z","lastTransitionTime":"2025-09-30T17:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.682791 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.682823 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.682832 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.682847 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.682858 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:24Z","lastTransitionTime":"2025-09-30T17:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.706329 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.706366 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:04:24 crc kubenswrapper[4821]: E0930 17:04:24.706444 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:04:24 crc kubenswrapper[4821]: E0930 17:04:24.706568 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.721864 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faa
f92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:24Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.745776 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c8193f74cb6e485a4d20caa7f9ba2691f8d1ac83503ae9af306c7efa4d33c12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:24Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.760117 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55rq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13b69688-7cac-4423-b4e1-553755af1baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca1fffcfa3c00155f7cd2e174872c2ea5baf5a8e4c111f482ac2f43736645df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgvmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55rq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:24Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.775748 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:24Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.786734 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.786763 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.786773 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.786786 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.786796 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:24Z","lastTransitionTime":"2025-09-30T17:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.799977 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54a85abaf14dff92f233b7496593399569718a56285e6ad12c12bff2cd61edf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54a85abaf14dff92f233b7496593399569718a56285e6ad12c12bff2cd61edf0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:04:14Z\\\",\\\"message\\\":\\\"ult_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4c1be812-05d3-4f45-91b5-a853a5c8de71}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0930 17:04:14.507124 6391 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} 
w\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:04:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-k7m5w_openshift-ovn-kubernetes(6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:24Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.812493 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ed48c9a-6f81-43be-9b63-906ab51dc67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12383b2a87a569574eb67b31e8fed56fcf4ed8ee981714de224d0336ce782c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2234f5a0ce398fbf4075fa759a154ccf0ac4175f55c41e0c21eac73ea23efa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9dvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:24Z is after 2025-08-24T17:21:41Z" Sep 30 
17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.829575 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc042ce02785326f5b2c0316774ffba46609cf5983d731f84580109ec437611b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:24Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.844162 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:24Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.857425 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:24Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.871814 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:24Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.885368 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:24Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.888940 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.888971 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.888984 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.889004 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.889015 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:24Z","lastTransitionTime":"2025-09-30T17:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.896370 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:24Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.906972 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zkvtw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djdqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djdqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:04:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zkvtw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:24Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.917861 4821 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3bd3e0-5235-416c-bab2-f2b60bab29cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9d50af49f77546ea6f04474dd222a4b9ee7d29f799ee20fad03c5ddf8e0b43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73c51f7ec47dab237eb69ee8ea74794f461ba42a02a6f2de5787c0a3bc972313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43f233324c870ea4a282464cb1d7a96b75d314a73d505fe47373c4269f1daeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSt
atuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf28842a6b40581032806bee8507201e74dac6ef16e4cba7573389672be0c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf28842a6b40581032806bee8507201e74dac6ef16e4cba7573389672be0c37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:24Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.928105 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-09-30T17:04:24Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.937289 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:24Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.948137 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:24Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.990995 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.991039 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.991048 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.991064 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:24 crc kubenswrapper[4821]: I0930 17:04:24.991075 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:24Z","lastTransitionTime":"2025-09-30T17:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.093955 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.093994 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.094006 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.094022 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.094033 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:25Z","lastTransitionTime":"2025-09-30T17:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.196292 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.196368 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.196380 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.196397 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.196408 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:25Z","lastTransitionTime":"2025-09-30T17:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.299213 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.299707 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.299716 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.299732 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.299741 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:25Z","lastTransitionTime":"2025-09-30T17:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.401958 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.402015 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.402029 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.402058 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.402071 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:25Z","lastTransitionTime":"2025-09-30T17:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.505291 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.505324 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.505332 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.505345 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.505354 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:25Z","lastTransitionTime":"2025-09-30T17:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.609385 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.609426 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.609435 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.609451 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.609462 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:25Z","lastTransitionTime":"2025-09-30T17:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.706845 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.706892 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:04:25 crc kubenswrapper[4821]: E0930 17:04:25.706982 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:04:25 crc kubenswrapper[4821]: E0930 17:04:25.707230 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.712307 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.712339 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.712349 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.712362 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.712373 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:25Z","lastTransitionTime":"2025-09-30T17:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.815161 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.815206 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.815217 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.815233 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.815243 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:25Z","lastTransitionTime":"2025-09-30T17:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.917764 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.917813 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.917823 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.917840 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:25 crc kubenswrapper[4821]: I0930 17:04:25.917849 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:25Z","lastTransitionTime":"2025-09-30T17:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.021429 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.021503 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.021514 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.021553 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.021567 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:26Z","lastTransitionTime":"2025-09-30T17:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.124556 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.124602 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.124613 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.124632 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.124645 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:26Z","lastTransitionTime":"2025-09-30T17:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.227677 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.227735 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.227751 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.227774 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.227794 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:26Z","lastTransitionTime":"2025-09-30T17:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.332078 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.332131 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.332145 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.332162 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.332172 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:26Z","lastTransitionTime":"2025-09-30T17:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.434460 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.434499 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.434510 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.434529 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.434553 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:26Z","lastTransitionTime":"2025-09-30T17:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.536918 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.536977 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.536990 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.537010 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.537021 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:26Z","lastTransitionTime":"2025-09-30T17:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.639231 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.639273 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.639287 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.639304 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.639315 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:26Z","lastTransitionTime":"2025-09-30T17:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.706939 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.707039 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:04:26 crc kubenswrapper[4821]: E0930 17:04:26.707118 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:04:26 crc kubenswrapper[4821]: E0930 17:04:26.707238 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.741435 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.741476 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.741487 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.741503 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.741515 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:26Z","lastTransitionTime":"2025-09-30T17:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.844523 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.844639 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.844660 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.844679 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.844691 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:26Z","lastTransitionTime":"2025-09-30T17:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.948651 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.948682 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.948691 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.948706 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:26 crc kubenswrapper[4821]: I0930 17:04:26.948714 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:26Z","lastTransitionTime":"2025-09-30T17:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.051255 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.051307 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.051318 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.051335 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.051347 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:27Z","lastTransitionTime":"2025-09-30T17:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.153220 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.153284 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.153301 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.153324 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.153343 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:27Z","lastTransitionTime":"2025-09-30T17:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.255988 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.256027 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.256038 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.256053 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.256064 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:27Z","lastTransitionTime":"2025-09-30T17:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.358782 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.358829 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.358838 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.358855 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.358866 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:27Z","lastTransitionTime":"2025-09-30T17:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.460710 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.460737 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.460746 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.460759 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.460769 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:27Z","lastTransitionTime":"2025-09-30T17:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.563292 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.563362 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.563375 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.563393 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.563786 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:27Z","lastTransitionTime":"2025-09-30T17:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.667058 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.667131 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.667144 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.667160 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.667171 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:27Z","lastTransitionTime":"2025-09-30T17:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.706459 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:04:27 crc kubenswrapper[4821]: E0930 17:04:27.706608 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.706820 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:04:27 crc kubenswrapper[4821]: E0930 17:04:27.706890 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.769502 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.769537 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.769548 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.769563 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.769575 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:27Z","lastTransitionTime":"2025-09-30T17:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.873159 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.873214 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.873227 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.873250 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.873262 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:27Z","lastTransitionTime":"2025-09-30T17:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.975929 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.975963 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.975971 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.975984 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:27 crc kubenswrapper[4821]: I0930 17:04:27.975994 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:27Z","lastTransitionTime":"2025-09-30T17:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.084636 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.084678 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.084689 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.084712 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.084726 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:28Z","lastTransitionTime":"2025-09-30T17:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.187817 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.187889 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.187905 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.187926 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.187941 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:28Z","lastTransitionTime":"2025-09-30T17:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.290519 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.290572 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.290583 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.290599 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.290608 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:28Z","lastTransitionTime":"2025-09-30T17:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.393650 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.393704 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.393715 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.393736 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.393747 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:28Z","lastTransitionTime":"2025-09-30T17:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.496230 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.496275 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.496284 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.496300 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.496310 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:28Z","lastTransitionTime":"2025-09-30T17:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.598099 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.598133 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.598141 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.598155 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.598164 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:28Z","lastTransitionTime":"2025-09-30T17:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.700348 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.700382 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.700390 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.700404 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.700413 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:28Z","lastTransitionTime":"2025-09-30T17:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.706650 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.706682 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:04:28 crc kubenswrapper[4821]: E0930 17:04:28.706758 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:04:28 crc kubenswrapper[4821]: E0930 17:04:28.706835 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.802993 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.803029 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.803039 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.803054 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.803065 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:28Z","lastTransitionTime":"2025-09-30T17:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.905124 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.905171 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.905183 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.905202 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:28 crc kubenswrapper[4821]: I0930 17:04:28.905211 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:28Z","lastTransitionTime":"2025-09-30T17:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.007435 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.007468 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.007477 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.007511 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.007521 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:29Z","lastTransitionTime":"2025-09-30T17:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.110203 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.110239 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.110249 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.110265 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.110274 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:29Z","lastTransitionTime":"2025-09-30T17:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.212620 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.212919 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.212936 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.212951 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.212961 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:29Z","lastTransitionTime":"2025-09-30T17:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.315747 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.315791 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.315800 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.315819 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.315831 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:29Z","lastTransitionTime":"2025-09-30T17:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.418611 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.418688 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.418724 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.418742 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.418756 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:29Z","lastTransitionTime":"2025-09-30T17:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.521786 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.521827 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.521835 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.521849 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.521858 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:29Z","lastTransitionTime":"2025-09-30T17:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.624645 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.624693 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.624704 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.624721 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.624737 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:29Z","lastTransitionTime":"2025-09-30T17:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.706779 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.706797 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:04:29 crc kubenswrapper[4821]: E0930 17:04:29.707251 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:04:29 crc kubenswrapper[4821]: E0930 17:04:29.707435 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.707559 4821 scope.go:117] "RemoveContainer" containerID="54a85abaf14dff92f233b7496593399569718a56285e6ad12c12bff2cd61edf0" Sep 30 17:04:29 crc kubenswrapper[4821]: E0930 17:04:29.707730 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-k7m5w_openshift-ovn-kubernetes(6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.727237 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.727280 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.727291 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.727307 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.727320 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:29Z","lastTransitionTime":"2025-09-30T17:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.829487 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.829525 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.829534 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.829548 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.829558 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:29Z","lastTransitionTime":"2025-09-30T17:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.933768 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.933816 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.933826 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.933842 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:29 crc kubenswrapper[4821]: I0930 17:04:29.933852 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:29Z","lastTransitionTime":"2025-09-30T17:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.036140 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.036173 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.036185 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.036200 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.036212 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:30Z","lastTransitionTime":"2025-09-30T17:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.138577 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.138625 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.138636 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.138651 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.138664 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:30Z","lastTransitionTime":"2025-09-30T17:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.240651 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.240685 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.240694 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.240710 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.240721 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:30Z","lastTransitionTime":"2025-09-30T17:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.343154 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.343227 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.343239 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.343256 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.343267 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:30Z","lastTransitionTime":"2025-09-30T17:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.446352 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.446409 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.446420 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.446439 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.446451 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:30Z","lastTransitionTime":"2025-09-30T17:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.549325 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.549380 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.549391 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.549413 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.549426 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:30Z","lastTransitionTime":"2025-09-30T17:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.651951 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.652011 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.652033 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.652061 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.652110 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:30Z","lastTransitionTime":"2025-09-30T17:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.706836 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.706923 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:04:30 crc kubenswrapper[4821]: E0930 17:04:30.707030 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:04:30 crc kubenswrapper[4821]: E0930 17:04:30.707160 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.755040 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.755193 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.755214 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.755245 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.755267 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:30Z","lastTransitionTime":"2025-09-30T17:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.857491 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.857570 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.857583 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.857607 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.857621 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:30Z","lastTransitionTime":"2025-09-30T17:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.959819 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.959890 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.959902 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.959918 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:30 crc kubenswrapper[4821]: I0930 17:04:30.959932 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:30Z","lastTransitionTime":"2025-09-30T17:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.063673 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.063755 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.063795 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.063816 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.063829 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:31Z","lastTransitionTime":"2025-09-30T17:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.166053 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.166118 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.166131 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.166146 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.166158 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:31Z","lastTransitionTime":"2025-09-30T17:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.268756 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.268843 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.268881 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.268905 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.268919 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:31Z","lastTransitionTime":"2025-09-30T17:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.371290 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.371373 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.371389 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.371416 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.371430 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:31Z","lastTransitionTime":"2025-09-30T17:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.473648 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.474012 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.474164 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.474265 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.474337 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:31Z","lastTransitionTime":"2025-09-30T17:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.577062 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.577472 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.577544 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.577622 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.577701 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:31Z","lastTransitionTime":"2025-09-30T17:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.680751 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.681137 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.681235 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.681312 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.681388 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:31Z","lastTransitionTime":"2025-09-30T17:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.706474 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.706547 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:04:31 crc kubenswrapper[4821]: E0930 17:04:31.706613 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:04:31 crc kubenswrapper[4821]: E0930 17:04:31.706783 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.783956 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.784011 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.784022 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.784039 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.784053 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:31Z","lastTransitionTime":"2025-09-30T17:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.886777 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.887121 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.887194 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.887260 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.887322 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:31Z","lastTransitionTime":"2025-09-30T17:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.990026 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.990590 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.990673 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.990756 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:31 crc kubenswrapper[4821]: I0930 17:04:31.990829 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:31Z","lastTransitionTime":"2025-09-30T17:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.093315 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.093396 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.093408 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.093424 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.093438 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:32Z","lastTransitionTime":"2025-09-30T17:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.145310 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.145352 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.145362 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.145377 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.145386 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:32Z","lastTransitionTime":"2025-09-30T17:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:32 crc kubenswrapper[4821]: E0930 17:04:32.156972 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2052ba73-7f50-4844-a6ef-43008c5ca24e\\\",\\\"systemUUID\\\":\\\"3c12aacb-94c6-4a5c-b29c-6c2e5c30c341\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:32Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.160548 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.160594 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.160606 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.160621 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.160630 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:32Z","lastTransitionTime":"2025-09-30T17:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:32 crc kubenswrapper[4821]: E0930 17:04:32.171534 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2052ba73-7f50-4844-a6ef-43008c5ca24e\\\",\\\"systemUUID\\\":\\\"3c12aacb-94c6-4a5c-b29c-6c2e5c30c341\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:32Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.178216 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.178262 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.178272 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.178288 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.178299 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:32Z","lastTransitionTime":"2025-09-30T17:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:32 crc kubenswrapper[4821]: E0930 17:04:32.188918 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2052ba73-7f50-4844-a6ef-43008c5ca24e\\\",\\\"systemUUID\\\":\\\"3c12aacb-94c6-4a5c-b29c-6c2e5c30c341\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:32Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.192292 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.192418 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.192509 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.192586 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.192669 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:32Z","lastTransitionTime":"2025-09-30T17:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:32 crc kubenswrapper[4821]: E0930 17:04:32.204904 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2052ba73-7f50-4844-a6ef-43008c5ca24e\\\",\\\"systemUUID\\\":\\\"3c12aacb-94c6-4a5c-b29c-6c2e5c30c341\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:32Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.208366 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.208400 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.208410 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.208424 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.208434 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:32Z","lastTransitionTime":"2025-09-30T17:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:32 crc kubenswrapper[4821]: E0930 17:04:32.219424 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2052ba73-7f50-4844-a6ef-43008c5ca24e\\\",\\\"systemUUID\\\":\\\"3c12aacb-94c6-4a5c-b29c-6c2e5c30c341\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:32Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:32 crc kubenswrapper[4821]: E0930 17:04:32.219547 4821 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.221380 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.221414 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.221425 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.221441 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.221453 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:32Z","lastTransitionTime":"2025-09-30T17:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.324241 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.324318 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.324329 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.324347 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.324358 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:32Z","lastTransitionTime":"2025-09-30T17:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.427030 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.427410 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.427490 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.427574 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.427651 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:32Z","lastTransitionTime":"2025-09-30T17:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.530293 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.530563 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.530643 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.530722 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.530815 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:32Z","lastTransitionTime":"2025-09-30T17:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.632771 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.632809 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.632818 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.632830 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.632842 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:32Z","lastTransitionTime":"2025-09-30T17:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.680594 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc-metrics-certs\") pod \"network-metrics-daemon-zkvtw\" (UID: \"3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc\") " pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:04:32 crc kubenswrapper[4821]: E0930 17:04:32.680710 4821 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 17:04:32 crc kubenswrapper[4821]: E0930 17:04:32.680762 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc-metrics-certs podName:3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc nodeName:}" failed. No retries permitted until 2025-09-30 17:05:04.680748439 +0000 UTC m=+100.585794383 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc-metrics-certs") pod "network-metrics-daemon-zkvtw" (UID: "3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc") : object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.706029 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.706044 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:04:32 crc kubenswrapper[4821]: E0930 17:04:32.706231 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:04:32 crc kubenswrapper[4821]: E0930 17:04:32.706396 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.734602 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.734637 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.734649 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.734665 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.734677 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:32Z","lastTransitionTime":"2025-09-30T17:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.837362 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.837433 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.837451 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.837475 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.837497 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:32Z","lastTransitionTime":"2025-09-30T17:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.940105 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.940380 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.940485 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.940575 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:32 crc kubenswrapper[4821]: I0930 17:04:32.940644 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:32Z","lastTransitionTime":"2025-09-30T17:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.042562 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.043111 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.043195 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.043275 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.043335 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:33Z","lastTransitionTime":"2025-09-30T17:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.145718 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.145763 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.145775 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.145791 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.145802 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:33Z","lastTransitionTime":"2025-09-30T17:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.248595 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.248648 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.248660 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.248677 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.248693 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:33Z","lastTransitionTime":"2025-09-30T17:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.351836 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.352136 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.352214 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.352287 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.352355 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:33Z","lastTransitionTime":"2025-09-30T17:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.454786 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.454869 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.454908 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.454934 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.454961 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:33Z","lastTransitionTime":"2025-09-30T17:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.557863 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.557915 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.557929 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.557954 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.557970 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:33Z","lastTransitionTime":"2025-09-30T17:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.660857 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.660908 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.660920 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.660939 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.660952 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:33Z","lastTransitionTime":"2025-09-30T17:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.706648 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.706704 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:04:33 crc kubenswrapper[4821]: E0930 17:04:33.706772 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:04:33 crc kubenswrapper[4821]: E0930 17:04:33.706917 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.718215 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.763616 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.763652 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.763661 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.763681 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.763691 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:33Z","lastTransitionTime":"2025-09-30T17:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.866103 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.866140 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.866151 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.866166 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.866175 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:33Z","lastTransitionTime":"2025-09-30T17:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.968177 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.968215 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.968226 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.968242 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:33 crc kubenswrapper[4821]: I0930 17:04:33.968254 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:33Z","lastTransitionTime":"2025-09-30T17:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.071137 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.071208 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.071218 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.071235 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.071248 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:34Z","lastTransitionTime":"2025-09-30T17:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.174649 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.174966 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.175119 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.175220 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.175303 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:34Z","lastTransitionTime":"2025-09-30T17:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.277563 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.277614 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.277625 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.277650 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.277664 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:34Z","lastTransitionTime":"2025-09-30T17:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.379848 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.379899 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.379913 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.379934 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.379948 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:34Z","lastTransitionTime":"2025-09-30T17:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.482705 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.483000 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.483146 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.483240 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.483316 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:34Z","lastTransitionTime":"2025-09-30T17:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.585613 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.585887 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.585948 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.586014 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.586073 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:34Z","lastTransitionTime":"2025-09-30T17:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.689598 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.689643 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.689657 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.689678 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.689693 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:34Z","lastTransitionTime":"2025-09-30T17:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.706514 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.706615 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:04:34 crc kubenswrapper[4821]: E0930 17:04:34.706876 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:04:34 crc kubenswrapper[4821]: E0930 17:04:34.706948 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.719387 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\
"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:34Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.732069 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:34Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.745252 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:34Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.757546 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:34Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.769456 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zkvtw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djdqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djdqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:04:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zkvtw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:34Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:34 crc 
kubenswrapper[4821]: I0930 17:04:34.781433 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3bd3e0-5235-416c-bab2-f2b60bab29cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9d50af49f77546ea6f04474dd222a4b9ee7d29f799ee20fad03c5ddf8e0b43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73c51f7ec47dab237eb69ee8ea74794f461ba42a02a6f2de5787c0a3bc972313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43f233324c870ea4a282464cb1d7a96b75d314a73d505fe47373c4269f1daeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf28842a6b40581032806bee8507201e74dac6ef16e4cba7573389672be0c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf28842a6b40581032806bee8507201e74dac6ef16e4cba7573389672be0c37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:34Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.791500 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.791537 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.791547 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.791562 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.791573 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:34Z","lastTransitionTime":"2025-09-30T17:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.794060 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc9a3823-19e2-412d-a305-b847467f940a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c4fba12f3da3af2b564ce19ef6a7306b5e4a6e41d35c2dd21d97df93c4113fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8008ebe7732b90014ee2dbb9e7b9cdf8e7a339a62e38e6c31e1ec7f6ca9ca7e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8008ebe7732b90014ee2dbb9e7b9cdf8e7a339a62e38e6c31e1ec7f6ca9ca7e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:34Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.810468 4821 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c8193f74cb6e485a4d20caa7f9ba2691f8d1ac83503ae9af306c7efa4d33c12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c
857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-
release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:34Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.822675 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55rq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13b69688-7cac-4423-b4e1-553755af1baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca1fffcfa3c00155f7cd2e174872c2ea5baf5a8e4c111f482ac2f43736645df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgvmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55rq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:34Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.837497 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:34Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.855241 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:34Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.868654 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:34Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.880258 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:34Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.892579 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:34Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.893756 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.893811 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.893822 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.893836 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.893845 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:34Z","lastTransitionTime":"2025-09-30T17:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.915724 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54a85abaf14dff92f233b7496593399569718a56285e6ad12c12bff2cd61edf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54a85abaf14dff92f233b7496593399569718a56285e6ad12c12bff2cd61edf0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:04:14Z\\\",\\\"message\\\":\\\"ult_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4c1be812-05d3-4f45-91b5-a853a5c8de71}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0930 17:04:14.507124 6391 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} 
w\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:04:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-k7m5w_openshift-ovn-kubernetes(6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:34Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.929220 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ed48c9a-6f81-43be-9b63-906ab51dc67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12383b2a87a569574eb67b31e8fed56fcf4ed8ee981714de224d0336ce782c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2234f5a0ce398fbf4075fa759a154ccf0ac4175f55c41e0c21eac73ea23efa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9dvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:34Z is after 2025-08-24T17:21:41Z" Sep 30 
17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.943624 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc042ce02785326f5b2c0316774ffba46609cf5983d731f84580109ec437611b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:34Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.957672 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:34Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.996525 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.996555 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.996564 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.996576 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:34 crc kubenswrapper[4821]: I0930 17:04:34.996586 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:34Z","lastTransitionTime":"2025-09-30T17:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.099194 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.099223 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.099235 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.099249 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.099260 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:35Z","lastTransitionTime":"2025-09-30T17:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.143050 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9sjg_c84981f2-eb86-4d0d-9322-db1b62feeac8/kube-multus/0.log" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.143124 4821 generic.go:334] "Generic (PLEG): container finished" podID="c84981f2-eb86-4d0d-9322-db1b62feeac8" containerID="9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd" exitCode=1 Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.143166 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h9sjg" event={"ID":"c84981f2-eb86-4d0d-9322-db1b62feeac8","Type":"ContainerDied","Data":"9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd"} Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.143617 4821 scope.go:117] "RemoveContainer" containerID="9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.161148 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:35Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.181204 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:35Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.195700 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:35Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.201329 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.201352 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.201360 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.201374 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.201383 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:35Z","lastTransitionTime":"2025-09-30T17:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.215256 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54a85abaf14dff92f233b7496593399569718a56285e6ad12c12bff2cd61edf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54a85abaf14dff92f233b7496593399569718a56285e6ad12c12bff2cd61edf0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:04:14Z\\\",\\\"message\\\":\\\"ult_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4c1be812-05d3-4f45-91b5-a853a5c8de71}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0930 17:04:14.507124 6391 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} 
w\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:04:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-k7m5w_openshift-ovn-kubernetes(6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:35Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.226679 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ed48c9a-6f81-43be-9b63-906ab51dc67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12383b2a87a569574eb67b31e8fed56fcf4ed8ee981714de224d0336ce782c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2234f5a0ce398fbf4075fa759a154ccf0ac4175f55c41e0c21eac73ea23efa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9dvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:35Z is after 2025-08-24T17:21:41Z" Sep 30 
17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.240132 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc042ce02785326f5b2c0316774ffba46609cf5983d731f84580109ec437611b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:35Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.255335 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:35Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.270340 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:35Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.283521 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:35Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.296736 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:35Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.304141 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.304181 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.304191 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.304206 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.304219 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:35Z","lastTransitionTime":"2025-09-30T17:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.312602 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:04:34Z\\\",\\\"message\\\":\\\"2025-09-30T17:03:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7a2c0467-0fb1-4f7d-9f5b-c0e665d265ab\\\\n2025-09-30T17:03:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7a2c0467-0fb1-4f7d-9f5b-c0e665d265ab to /host/opt/cni/bin/\\\\n2025-09-30T17:03:49Z [verbose] multus-daemon started\\\\n2025-09-30T17:03:49Z [verbose] Readiness Indicator file check\\\\n2025-09-30T17:04:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:35Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.325637 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zkvtw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djdqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djdqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:04:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zkvtw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-09-30T17:04:35Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.338673 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3bd3e0-5235-416c-bab2-f2b60bab29cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9d50af49f77546ea6f04474dd222a4b9ee7d29f799ee20fad03c5ddf8e0b43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73c51f7ec47dab237eb69ee8ea74794f461ba42a02a6f2de5787c0a3bc972313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43f233324c870ea4a282464cb1d7a96b75d314a73d505fe47373c4269f1daeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf28842a6b40581032806bee8507201e74dac6ef16e4cba7573389672be0c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf28842a6b40581032806bee8507201e74dac6ef16e4cba7573389672be0c37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:35Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.349819 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc9a3823-19e2-412d-a305-b847467f940a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c4fba12f3da3af2b564ce19ef6a7306b5e4a6e41d35c2dd21d97df93c4113fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o
://8008ebe7732b90014ee2dbb9e7b9cdf8e7a339a62e38e6c31e1ec7f6ca9ca7e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8008ebe7732b90014ee2dbb9e7b9cdf8e7a339a62e38e6c31e1ec7f6ca9ca7e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:35Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.363654 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:35Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.371315 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55rq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13b69688-7cac-4423-b4e1-553755af1baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca1fffcfa3c00155f7cd2e174872c2ea5baf5a8e4c111f482ac2f43736645df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgvmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55rq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:35Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.382188 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:35Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.395293 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c8193f74cb6e485a4d20caa7f9ba2691f8d1ac83503ae9af306c7efa4d33c12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd5
0ca745af0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:35Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.407602 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.407649 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.407668 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.407700 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.407718 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:35Z","lastTransitionTime":"2025-09-30T17:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.511880 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.511930 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.511948 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.511973 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.511990 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:35Z","lastTransitionTime":"2025-09-30T17:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.614222 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.614399 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.614564 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.614727 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.614866 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:35Z","lastTransitionTime":"2025-09-30T17:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.706264 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:04:35 crc kubenswrapper[4821]: E0930 17:04:35.706652 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.706909 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:04:35 crc kubenswrapper[4821]: E0930 17:04:35.707037 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.717012 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.717051 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.717062 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.717092 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.717103 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:35Z","lastTransitionTime":"2025-09-30T17:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.819640 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.819694 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.819704 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.819720 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.819730 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:35Z","lastTransitionTime":"2025-09-30T17:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.922948 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.922999 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.923018 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.923043 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:35 crc kubenswrapper[4821]: I0930 17:04:35.923060 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:35Z","lastTransitionTime":"2025-09-30T17:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.024915 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.024947 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.024956 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.024970 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.024980 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:36Z","lastTransitionTime":"2025-09-30T17:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.127440 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.127489 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.127499 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.127519 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.127531 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:36Z","lastTransitionTime":"2025-09-30T17:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.149693 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9sjg_c84981f2-eb86-4d0d-9322-db1b62feeac8/kube-multus/0.log" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.149760 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h9sjg" event={"ID":"c84981f2-eb86-4d0d-9322-db1b62feeac8","Type":"ContainerStarted","Data":"bb09282aaacd229c66305d60e720c01a4f2ae0ffa6aadaf7e89fb3976883bb66"} Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.162865 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:36Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.173494 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3bd3e0-5235-416c-bab2-f2b60bab29cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9d50af49f77546ea6f04474dd222a4b9ee7d29f799ee20fad03c5ddf8e0b43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73c51f7ec47dab237eb69ee8ea74794f461ba42a02a6f2de5787c0a3bc972313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43f233324c870ea4a282464cb1d7a96b75d314a73d505fe47373c4269f1daeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf28842a6b40581032806bee8507201e74dac6ef16e4cba7573389672be0c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf28842a6b40581032806bee8507201e74dac6ef16e4cba7573389672be0c37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:36Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.186563 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc9a3823-19e2-412d-a305-b847467f940a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c4fba12f3da3af2b564ce19ef6a7306b5e4a6e41d35c2dd21d97df93c4113fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8008ebe7732b90014ee2dbb9e7b9cdf8e7a339a62e38e6c31e1ec7f6ca9ca7e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8008ebe7732b90014ee2dbb9e7b9cdf8e7a339a62e38e6c31e1ec7f6ca9ca7e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:36Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.201302 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:36Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.211995 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:36Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.228359 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb09282aaacd229c66305d60e720c01a4f2ae0ffa6aadaf7e89fb3976883bb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:04:34Z\\\",\\\"message\\\":\\\"2025-09-30T17:03:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7a2c0467-0fb1-4f7d-9f5b-c0e665d265ab\\\\n2025-09-30T17:03:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7a2c0467-0fb1-4f7d-9f5b-c0e665d265ab to /host/opt/cni/bin/\\\\n2025-09-30T17:03:49Z [verbose] multus-daemon started\\\\n2025-09-30T17:03:49Z [verbose] Readiness Indicator file check\\\\n2025-09-30T17:04:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:36Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.229953 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.229980 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.229993 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.230010 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.230021 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:36Z","lastTransitionTime":"2025-09-30T17:04:36Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.239802 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zkvtw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djdqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djdqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:04:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zkvtw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:36Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.251576 4821 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:36Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.264773 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c8193f74cb6e485a4d20caa7f9ba2691f8d1ac83503ae9af306c7efa4d33c12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[
{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"reason\\
\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc
4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:36Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.278856 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55rq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13b69688-7cac-4423-b4e1-553755af1baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca1fffcfa3c00155f7cd2e174872c2ea5baf5a8e4c111f482ac2f43736645df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgvmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",
\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55rq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:36Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.331281 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"nam
e\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54a85abaf14dff92f233b7496593399569718a56285e6ad12c12bff2cd61edf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54a85abaf14dff92f233b7496593399569718a56285e6ad12c12bff2cd61edf0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:04:14Z\\\",\\\"message\\\":\\\"ult_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4c1be812-05d3-4f45-91b5-a853a5c8de71}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0930 17:04:14.507124 6391 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: 
failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} w\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:04:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-k7m5w_openshift-ovn-kubernetes(6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\
\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:36Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.332756 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.332800 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.332813 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.332828 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.332838 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:36Z","lastTransitionTime":"2025-09-30T17:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.352838 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ed48c9a-6f81-43be-9b63-906ab51dc67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12383b2a87a569574eb67b31e8fed56fcf4ed8ee981714de224d0336ce782c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2234f5a0ce398fbf4075fa759a154ccf0ac4175f55c41e0c21eac73ea23efa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9dvx5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:36Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.383907 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc042ce02785326f5b2c0316774ffba46609cf5983d731f84580109ec437611b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:36Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.400454 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:36Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.414612 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:36Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.429585 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:36Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.434546 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.434592 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.434602 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.434616 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.434625 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:36Z","lastTransitionTime":"2025-09-30T17:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.443526 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:36Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.455966 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:36Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.536423 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.536461 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.536471 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.536486 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.536497 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:36Z","lastTransitionTime":"2025-09-30T17:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.638912 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.638954 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.638965 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.638984 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.638995 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:36Z","lastTransitionTime":"2025-09-30T17:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.707147 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.707169 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:04:36 crc kubenswrapper[4821]: E0930 17:04:36.707275 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:04:36 crc kubenswrapper[4821]: E0930 17:04:36.707390 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.741542 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.741578 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.741588 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.741601 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.741611 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:36Z","lastTransitionTime":"2025-09-30T17:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.844975 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.845042 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.845055 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.845105 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.845128 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:36Z","lastTransitionTime":"2025-09-30T17:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.948046 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.948102 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.948111 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.948127 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:36 crc kubenswrapper[4821]: I0930 17:04:36.948137 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:36Z","lastTransitionTime":"2025-09-30T17:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.050448 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.050494 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.050506 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.050522 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.050532 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:37Z","lastTransitionTime":"2025-09-30T17:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.153646 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.153689 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.153698 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.153715 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.153733 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:37Z","lastTransitionTime":"2025-09-30T17:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.256101 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.256143 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.256153 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.256167 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.256178 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:37Z","lastTransitionTime":"2025-09-30T17:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.358666 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.358697 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.358706 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.358720 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.358730 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:37Z","lastTransitionTime":"2025-09-30T17:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.462133 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.462183 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.462203 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.462224 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.462242 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:37Z","lastTransitionTime":"2025-09-30T17:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.565025 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.565117 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.565137 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.565163 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.565180 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:37Z","lastTransitionTime":"2025-09-30T17:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.667853 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.667895 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.667904 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.667919 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.667930 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:37Z","lastTransitionTime":"2025-09-30T17:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.706357 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:04:37 crc kubenswrapper[4821]: E0930 17:04:37.706462 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.706360 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:04:37 crc kubenswrapper[4821]: E0930 17:04:37.706639 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.770144 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.770168 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.770179 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.770191 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.770200 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:37Z","lastTransitionTime":"2025-09-30T17:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.872909 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.872944 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.872952 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.872968 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.872978 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:37Z","lastTransitionTime":"2025-09-30T17:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.975724 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.975862 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.975888 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.975919 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:37 crc kubenswrapper[4821]: I0930 17:04:37.975943 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:37Z","lastTransitionTime":"2025-09-30T17:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.077900 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.077943 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.077967 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.077985 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.077998 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:38Z","lastTransitionTime":"2025-09-30T17:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.179777 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.179805 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.179813 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.179826 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.179835 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:38Z","lastTransitionTime":"2025-09-30T17:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.281793 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.281827 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.281836 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.281850 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.281861 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:38Z","lastTransitionTime":"2025-09-30T17:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.385292 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.385345 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.385364 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.385387 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.385406 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:38Z","lastTransitionTime":"2025-09-30T17:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.488234 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.488321 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.488342 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.488373 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.488394 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:38Z","lastTransitionTime":"2025-09-30T17:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.590849 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.590923 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.590948 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.590985 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.591012 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:38Z","lastTransitionTime":"2025-09-30T17:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.693043 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.693132 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.693153 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.693178 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.693196 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:38Z","lastTransitionTime":"2025-09-30T17:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.707261 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw"
Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.707347 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 30 17:04:38 crc kubenswrapper[4821]: E0930 17:04:38.707474 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc"
Sep 30 17:04:38 crc kubenswrapper[4821]: E0930 17:04:38.707623 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.796335 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.796395 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.796409 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.796428 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.796442 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:38Z","lastTransitionTime":"2025-09-30T17:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.899175 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.899212 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.899222 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.899237 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:38 crc kubenswrapper[4821]: I0930 17:04:38.899248 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:38Z","lastTransitionTime":"2025-09-30T17:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.002835 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.002887 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.002902 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.002922 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.002933 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:39Z","lastTransitionTime":"2025-09-30T17:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.105865 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.105912 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.105925 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.105945 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.105958 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:39Z","lastTransitionTime":"2025-09-30T17:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.211783 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.211852 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.211870 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.211896 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.211915 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:39Z","lastTransitionTime":"2025-09-30T17:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.315721 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.315788 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.315806 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.315832 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.315849 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:39Z","lastTransitionTime":"2025-09-30T17:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.418929 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.418971 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.418981 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.419000 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.419013 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:39Z","lastTransitionTime":"2025-09-30T17:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.524035 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.524137 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.524158 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.524184 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.524203 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:39Z","lastTransitionTime":"2025-09-30T17:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.627821 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.627912 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.627937 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.627969 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.627991 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:39Z","lastTransitionTime":"2025-09-30T17:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.706038 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.706470 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 30 17:04:39 crc kubenswrapper[4821]: E0930 17:04:39.706660 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 30 17:04:39 crc kubenswrapper[4821]: E0930 17:04:39.706803 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.730519 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.730598 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.730619 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.730647 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.730664 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:39Z","lastTransitionTime":"2025-09-30T17:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.833965 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.834033 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.834053 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.834113 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.834132 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:39Z","lastTransitionTime":"2025-09-30T17:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.936964 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.937023 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.937039 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.937062 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:39 crc kubenswrapper[4821]: I0930 17:04:39.937099 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:39Z","lastTransitionTime":"2025-09-30T17:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.040371 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.040429 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.040446 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.040476 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.040495 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:40Z","lastTransitionTime":"2025-09-30T17:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.144363 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.144450 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.144472 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.144506 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.144529 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:40Z","lastTransitionTime":"2025-09-30T17:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.248691 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.248767 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.248786 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.248813 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.248840 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:40Z","lastTransitionTime":"2025-09-30T17:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.351933 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.351989 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.352007 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.352034 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.352053 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:40Z","lastTransitionTime":"2025-09-30T17:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.454497 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.454566 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.454590 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.454626 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.454652 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:40Z","lastTransitionTime":"2025-09-30T17:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.558819 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.558891 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.558914 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.558944 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.558966 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:40Z","lastTransitionTime":"2025-09-30T17:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.661538 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.661606 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.661624 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.661649 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.661666 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:40Z","lastTransitionTime":"2025-09-30T17:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.706641 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.706692 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 30 17:04:40 crc kubenswrapper[4821]: E0930 17:04:40.706870 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc"
Sep 30 17:04:40 crc kubenswrapper[4821]: E0930 17:04:40.707060 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.709616 4821 scope.go:117] "RemoveContainer" containerID="54a85abaf14dff92f233b7496593399569718a56285e6ad12c12bff2cd61edf0"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.764548 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.764601 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.764611 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.764625 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.764637 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:40Z","lastTransitionTime":"2025-09-30T17:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.867909 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.867980 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.868000 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.868028 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.868048 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:40Z","lastTransitionTime":"2025-09-30T17:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.970377 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.970445 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.970465 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.970490 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:40 crc kubenswrapper[4821]: I0930 17:04:40.970506 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:40Z","lastTransitionTime":"2025-09-30T17:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.072798 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.072847 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.072860 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.072880 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.072892 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:41Z","lastTransitionTime":"2025-09-30T17:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.168709 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7m5w_6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca/ovnkube-controller/2.log"
Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.179045 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.179160 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.179182 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.179211 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.179231 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:41Z","lastTransitionTime":"2025-09-30T17:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.180648 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" event={"ID":"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca","Type":"ContainerStarted","Data":"f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437"}
Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.181880 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w"
Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.206743 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:41Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.239243 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:41Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.258192 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:41Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.275843 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:41Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.281970 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.282048 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.282062 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.282096 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.282106 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:41Z","lastTransitionTime":"2025-09-30T17:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.300768 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:41Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.327260 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54a85abaf14dff92f233b7496593399569718a56285e6ad12c12bff2cd61edf0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:04:14Z\\\",\\\"message\\\":\\\"ult_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4c1be812-05d3-4f45-91b5-a853a5c8de71}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0930 17:04:14.507124 6391 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} 
w\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:04:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:41Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.338339 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ed48c9a-6f81-43be-9b63-906ab51dc67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12383b2a87a569574eb67b31e8fed56fcf4ed8ee981714de224d0336ce782c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2234f5a0ce398fbf4075fa759a154ccf0ac4175f55c41e0c21eac73ea23efa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9dvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:41Z is after 2025-08-24T17:21:41Z" Sep 30 
17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.350615 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc042ce02785326f5b2c0316774ffba46609cf5983d731f84580109ec437611b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:41Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.361191 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:41Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.373963 4821 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc9a3823-19e2-412d-a305-b847467f940a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c4fba12f3da3af2b564ce19ef6a7306b5e4a6e41d35c2dd21d97df93c4113fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8008ebe7732b90014ee2dbb9e7b9cdf8e7a339a62e38e6c31e1ec7f6ca9ca7e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8008ebe7732b90014ee2dbb9e7b9cdf8e7a339a62e38e6c31e1ec7f6ca9ca7e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:41Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.384828 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:41 crc kubenswrapper[4821]: 
I0930 17:04:41.384874 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.384890 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.384907 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.384921 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:41Z","lastTransitionTime":"2025-09-30T17:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.388141 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:41Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.399641 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:41Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.413074 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb09282aaacd229c66305d60e720c01a4f2ae0ffa6aadaf7e89fb3976883bb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:04:34Z\\\",\\\"message\\\":\\\"2025-09-30T17:03:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7a2c0467-0fb1-4f7d-9f5b-c0e665d265ab\\\\n2025-09-30T17:03:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7a2c0467-0fb1-4f7d-9f5b-c0e665d265ab to /host/opt/cni/bin/\\\\n2025-09-30T17:03:49Z [verbose] multus-daemon started\\\\n2025-09-30T17:03:49Z [verbose] Readiness Indicator file check\\\\n2025-09-30T17:04:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:41Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.427536 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zkvtw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djdqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djdqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:04:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zkvtw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:41Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.440614 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3bd3e0-5235-416c-bab2-f2b60bab29cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9d50af49f77546ea6f04474dd222a4b9ee7d29f799ee20fad03c5ddf8e0b43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73c51f7ec47dab237eb69ee8ea74794f461ba42a02a6f2de5787c0a3bc972313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43f233324c870ea4a282464cb1d7a96b75d314a73d505fe47373c4269f1daeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf28842a6b40581032806bee8507201e74dac6ef16e4cba7573389672be0c37\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf28842a6b40581032806bee8507201e74dac6ef16e4cba7573389672be0c37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:41Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.454441 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:41Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.467139 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c8193f74cb6e485a4d20caa7f9ba2691f8d1ac83503ae9af306c7efa4d33c12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:41Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.477535 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55rq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13b69688-7cac-4423-b4e1-553755af1baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca1fffcfa3c00155f7cd2e174872c2ea5baf5a8e4c111f482ac2f43736645df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgvmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55rq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:41Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.487668 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.487794 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.487890 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.487974 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.488035 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:41Z","lastTransitionTime":"2025-09-30T17:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.591660 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.591986 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.592119 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.592204 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.592272 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:41Z","lastTransitionTime":"2025-09-30T17:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.694491 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.694525 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.694534 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.694548 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.694557 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:41Z","lastTransitionTime":"2025-09-30T17:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.706941 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:04:41 crc kubenswrapper[4821]: E0930 17:04:41.707042 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.706941 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:04:41 crc kubenswrapper[4821]: E0930 17:04:41.707268 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.797374 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.797415 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.797430 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.797445 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.797457 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:41Z","lastTransitionTime":"2025-09-30T17:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.899622 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.899690 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.899701 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.899720 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:41 crc kubenswrapper[4821]: I0930 17:04:41.899733 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:41Z","lastTransitionTime":"2025-09-30T17:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
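
[Editor's note] The condition repeated throughout this stretch, "no CNI configuration file in /etc/kubernetes/cni/net.d/", is raised because the container runtime finds nothing to load from its CNI config directory. As a rough, self-contained illustration (not the actual CRI-O/ocicni code), the check amounts to scanning that directory for config files; the helper name `hasCNIConfig` is invented for this sketch, and the path is taken from the log message:

```go
// Sketch only: approximates the "is there any CNI config?" check that
// drives the NetworkReady=false condition logged above. This is NOT the
// real CRI-O/ocicni implementation, just the shape of the test.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig (hypothetical helper) reports whether confDir contains
// any file with an extension a CNI loader would accept.
func hasCNIConfig(confDir string) (bool, error) {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		// Mirrors the message kubelet keeps logging above.
		fmt.Println("network plugin not ready: no CNI configuration file")
		return
	}
	fmt.Println("NetworkReady=true")
}
```

Once OVN-Kubernetes writes its config into the watched directory, this check flips and the kubelet stops emitting the NotReady condition.
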
Has your network provider started?"} Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.002295 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.002352 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.002364 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.002388 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.002402 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:42Z","lastTransitionTime":"2025-09-30T17:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.105531 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.105599 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.105613 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.105641 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.105656 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:42Z","lastTransitionTime":"2025-09-30T17:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
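
[Editor's note] Each "Node became not ready" entry above carries a `condition={...}` payload. Marshaling a local struct with the same fields reproduces its exact JSON shape; the struct mirrors Kubernetes' `v1.NodeCondition` but is declared locally here so the sketch needs no k8s dependencies:

```go
// Minimal sketch of the condition payload in the "Node became not ready"
// entries above. Field names match the log output; the struct is a local
// stand-in for v1.NodeCondition, not an import of it.
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	now := time.Now().UTC().Format(time.RFC3339)
	c := NodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  now,
		LastTransitionTime: now,
		Reason:             "KubeletNotReady",
		Message:            "container runtime network not ready: NetworkReady=false",
	}
	b, _ := json.Marshal(c)
	fmt.Println(string(b)) // same shape as condition={...} in the log
}
```
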
Has your network provider started?"} Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.186428 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7m5w_6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca/ovnkube-controller/3.log" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.187522 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7m5w_6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca/ovnkube-controller/2.log" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.191797 4821 generic.go:334] "Generic (PLEG): container finished" podID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerID="f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437" exitCode=1 Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.191848 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" event={"ID":"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca","Type":"ContainerDied","Data":"f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437"} Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.191955 4821 scope.go:117] "RemoveContainer" containerID="54a85abaf14dff92f233b7496593399569718a56285e6ad12c12bff2cd61edf0" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.193708 4821 scope.go:117] "RemoveContainer" containerID="f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437" Sep 30 17:04:42 crc kubenswrapper[4821]: E0930 17:04:42.194238 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-k7m5w_openshift-ovn-kubernetes(6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.208842 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.208894 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.208913 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.208936 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.208952 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:42Z","lastTransitionTime":"2025-09-30T17:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
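
[Editor's note] The ovnkube-controller entries just above show a container exiting with code 1 and the kubelet refusing an immediate restart: "back-off 40s restarting failed container". Kubelet's documented crash-loop backoff starts at 10s and doubles per consecutive crash, capped at five minutes, so a 40s backoff corresponds to roughly the third restart attempt. A sketch of that schedule (the `backoff` helper is invented for illustration):

```go
// Sketch of the kubelet crash-loop backoff schedule, documented as
// 10s, 20s, 40s, ... capped at 5m. A "back-off 40s" message like the
// one above corresponds to the third delay in the sequence.
package main

import (
	"fmt"
	"time"
)

func backoff(priorRestarts int) time.Duration {
	d := 10 * time.Second
	for i := 0; i < priorRestarts; i++ {
		d *= 2
		if d > 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return d
}

func main() {
	for i := 0; i < 6; i++ {
		fmt.Printf("restart %d -> wait %s\n", i+1, backoff(i))
	}
}
```
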
Has your network provider started?"} Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.210663 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:42Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.228412 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:42Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.245757 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb09282aaacd229c66305d60e720c01a4f2ae0ffa6aadaf7e89fb3976883bb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:04:34Z\\\",\\\"message\\\":\\\"2025-09-30T17:03:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7a2c0467-0fb1-4f7d-9f5b-c0e665d265ab\\\\n2025-09-30T17:03:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7a2c0467-0fb1-4f7d-9f5b-c0e665d265ab to /host/opt/cni/bin/\\\\n2025-09-30T17:03:49Z [verbose] multus-daemon started\\\\n2025-09-30T17:03:49Z [verbose] Readiness Indicator file check\\\\n2025-09-30T17:04:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:42Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.262672 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zkvtw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djdqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djdqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:04:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zkvtw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:42Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.276064 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3bd3e0-5235-416c-bab2-f2b60bab29cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9d50af49f77546ea6f04474dd222a4b9ee7d29f799ee20fad03c5ddf8e0b43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73c51f7ec47dab237eb69ee8ea74794f461ba42a02a6f2de5787c0a3bc972313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43f233324c870ea4a282464cb1d7a96b75d314a73d505fe47373c4269f1daeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf28842a6b40581032806bee8507201e74dac6ef16e4cba7573389672be0c37\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf28842a6b40581032806bee8507201e74dac6ef16e4cba7573389672be0c37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:42Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.289689 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc9a3823-19e2-412d-a305-b847467f940a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c4fba12f3da3af2b564ce19ef6a7306b5e4a6e41d35c2dd21d97df93c4113fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8008ebe7732b90014ee2dbb9e7b9cdf8e7a339a62e38e6c31e1ec7f6ca9ca7e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8008ebe7732b90014ee2dbb9e7b9cdf8e7a339a62e38e6c31e1ec7f6ca9ca7e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:42Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.306290 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:42Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.318285 4821 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.318381 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.318394 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.318420 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.318437 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:42Z","lastTransitionTime":"2025-09-30T17:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.325145 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55rq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13b69688-7cac-4423-b4e1-553755af1baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca1fffcfa3c00155f7cd2e174872c2ea5baf5a8e4c111f482ac2f43736645df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgvmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55rq2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:42Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.342005 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:42Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.369494 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c8193f74cb6e485a4d20caa7f9ba2691f8d1ac83503ae9af306c7efa4d33c12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:42Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.395193 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:42Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.417685 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:42Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.421993 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.422046 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.422066 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.422127 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.422146 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:42Z","lastTransitionTime":"2025-09-30T17:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.440657 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:42Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.466021 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54a85abaf14dff92f233b7496593399569718a56285e6ad12c12bff2cd61edf0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:04:14Z\\\",\\\"message\\\":\\\"ult_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4c1be812-05d3-4f45-91b5-a853a5c8de71}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0930 17:04:14.507124 6391 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} w\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:04:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:04:41Z\\\",\\\"message\\\":\\\"k controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:41Z is after 2025-08-24T17:21:41Z]\\\\nI0930 17:04:41.639758 6739 services_controller.go:434] Service openshift-machine-api/control-plane-machine-set-operator retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{control-plane-machine-set-operator openshift-machine-api ffd0ef27-d28d-43cc-90c8-0e8843e4c04c 4409 0 2025-02-23 05:12:21 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[k8s-app:control-plane-machine-set-operator] 
map[capability.openshift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true service.alpha.openshift.io/serving-cert-secret-name:control-plane-machine-set-operator-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc006ecd407 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"
mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:42Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.481159 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ed48c9a-6f81-43be-9b63-906ab51dc67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12383b2a87a569574eb67b31e8fed56fcf4ed8ee981714de224d0336ce782c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2234f5a0ce398fbf4075fa759a154ccf0ac4175f55c41e0c21eac73ea23efa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9dvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:42Z is after 2025-08-24T17:21:41Z" Sep 30 
17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.494364 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc042ce02785326f5b2c0316774ffba46609cf5983d731f84580109ec437611b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:42Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.508551 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:42Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.526206 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.526252 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.526264 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.526286 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.526299 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:42Z","lastTransitionTime":"2025-09-30T17:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.526610 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:42Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.535733 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.535757 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.535765 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.535775 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.535784 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:42Z","lastTransitionTime":"2025-09-30T17:04:42Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:42 crc kubenswrapper[4821]: E0930 17:04:42.555728 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2052ba73-7f50-4844-a6ef-43008c5ca24e\\\",\\\"systemUUID\\\":\\\"3c12aacb-94c6-4a5c-b29c-6c2e5c30c341\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:42Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.561652 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.561740 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.561761 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.561792 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.561813 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:42Z","lastTransitionTime":"2025-09-30T17:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:42 crc kubenswrapper[4821]: E0930 17:04:42.576423 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2052ba73-7f50-4844-a6ef-43008c5ca24e\\\",\\\"systemUUID\\\":\\\"3c12aacb-94c6-4a5c-b29c-6c2e5c30c341\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:42Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.581864 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.581935 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.581958 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.581985 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.582005 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:42Z","lastTransitionTime":"2025-09-30T17:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:42 crc kubenswrapper[4821]: E0930 17:04:42.598390 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2052ba73-7f50-4844-a6ef-43008c5ca24e\\\",\\\"systemUUID\\\":\\\"3c12aacb-94c6-4a5c-b29c-6c2e5c30c341\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:42Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.604025 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.604134 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.604156 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.604183 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.604202 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:42Z","lastTransitionTime":"2025-09-30T17:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:42 crc kubenswrapper[4821]: E0930 17:04:42.619759 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2052ba73-7f50-4844-a6ef-43008c5ca24e\\\",\\\"systemUUID\\\":\\\"3c12aacb-94c6-4a5c-b29c-6c2e5c30c341\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:42Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.624689 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.624746 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.624764 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.624795 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.624821 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:42Z","lastTransitionTime":"2025-09-30T17:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:42 crc kubenswrapper[4821]: E0930 17:04:42.638493 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2052ba73-7f50-4844-a6ef-43008c5ca24e\\\",\\\"systemUUID\\\":\\\"3c12aacb-94c6-4a5c-b29c-6c2e5c30c341\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:42Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:42 crc kubenswrapper[4821]: E0930 17:04:42.638628 4821 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.640790 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.640834 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.640845 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.640874 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.640887 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:42Z","lastTransitionTime":"2025-09-30T17:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.706244 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.706291 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:04:42 crc kubenswrapper[4821]: E0930 17:04:42.706528 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:04:42 crc kubenswrapper[4821]: E0930 17:04:42.706662 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.744039 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.744109 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.744120 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.744142 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.744156 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:42Z","lastTransitionTime":"2025-09-30T17:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.847234 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.847284 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.847297 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.847319 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.847334 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:42Z","lastTransitionTime":"2025-09-30T17:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.950962 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.951077 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.951123 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.951150 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:42 crc kubenswrapper[4821]: I0930 17:04:42.951166 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:42Z","lastTransitionTime":"2025-09-30T17:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.054566 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.054661 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.054689 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.054726 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.054749 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:43Z","lastTransitionTime":"2025-09-30T17:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.159287 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.159354 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.159371 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.159400 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.159418 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:43Z","lastTransitionTime":"2025-09-30T17:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.207611 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7m5w_6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca/ovnkube-controller/3.log" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.214334 4821 scope.go:117] "RemoveContainer" containerID="f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437" Sep 30 17:04:43 crc kubenswrapper[4821]: E0930 17:04:43.214967 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-k7m5w_openshift-ovn-kubernetes(6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.237224 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3bd3e0-5235-416c-bab2-f2b60bab29cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9d50af49f77546ea6f04474dd222a4b9ee7d29f799ee20fad03c5ddf8e0b43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73c51f7ec47dab237eb69ee8ea74794f461ba42a02a6f2de5787c0a3bc972313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43f233324c870ea4a282464cb1d7a96b75d314a73d505fe47373c4269f1daeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf28842a6b40581032806bee8507201e74dac6ef16e4cba7573389672be0c37\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf28842a6b40581032806bee8507201e74dac6ef16e4cba7573389672be0c37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.261574 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc9a3823-19e2-412d-a305-b847467f940a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c4fba12f3da3af2b564ce19ef6a7306b5e4a6e41d35c2dd21d97df93c4113fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8008ebe7732b90014ee2dbb9e7b9cdf8e7a339a62e38e6c31e1ec7f6ca9ca7e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8008ebe7732b90014ee2dbb9e7b9cdf8e7a339a62e38e6c31e1ec7f6ca9ca7e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:43Z is after 2025-08-24T17:21:41Z"
Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.264621 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.264670 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.264685 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.265442 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.265459 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:43Z","lastTransitionTime":"2025-09-30T17:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.282752 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.295224 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.313188 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb09282aaacd229c66305d60e720c01a4f2ae0ffa6aadaf7e89fb3976883bb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:04:34Z\\\",\\\"message\\\":\\\"2025-09-30T17:03:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7a2c0467-0fb1-4f7d-9f5b-c0e665d265ab\\\\n2025-09-30T17:03:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7a2c0467-0fb1-4f7d-9f5b-c0e665d265ab to /host/opt/cni/bin/\\\\n2025-09-30T17:03:49Z [verbose] multus-daemon started\\\\n2025-09-30T17:03:49Z [verbose] Readiness Indicator file check\\\\n2025-09-30T17:04:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.326578 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zkvtw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djdqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djdqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:04:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zkvtw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.340155 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.355537 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c8193f74cb6e485a4d20caa7f9ba2691f8d1ac83503ae9af306c7efa4d33c12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd5
0ca745af0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.367922 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55rq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13b69688-7cac-4423-b4e1-553755af1baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca1fffcfa3c00155f7cd2e174872c2ea5baf5a8e4c111f482ac2f43736645df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgvmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:50Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-55rq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.368285 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.368317 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.368332 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.368347 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.368357 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:43Z","lastTransitionTime":"2025-09-30T17:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.379341 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ed48c9a-6f81-43be-9b63-906ab51dc67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12383b2a87a569574eb67b31e8fed56fcf4ed8ee981714de224d0336ce782c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secret
s/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2234f5a0ce398fbf4075fa759a154ccf0ac4175f55c41e0c21eac73ea23efa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9dvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.397982 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc042ce02785326f5b2c0316774ffba46609cf5983d731f84580109ec437611b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.415156 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.429687 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.444250 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.457490 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.469761 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.471987 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.472070 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.472129 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.472148 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.472161 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:43Z","lastTransitionTime":"2025-09-30T17:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.492908 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:04:41Z\\\",\\\"message\\\":\\\"k controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:41Z is after 2025-08-24T17:21:41Z]\\\\nI0930 17:04:41.639758 6739 services_controller.go:434] Service openshift-machine-api/control-plane-machine-set-operator retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{control-plane-machine-set-operator openshift-machine-api ffd0ef27-d28d-43cc-90c8-0e8843e4c04c 4409 0 2025-02-23 05:12:21 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[k8s-app:control-plane-machine-set-operator] map[capability.openshift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true service.alpha.openshift.io/serving-cert-secret-name:control-plane-machine-set-operator-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc006ecd407 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:04:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=ovnkube-controller pod=ovnkube-node-k7m5w_openshift-ovn-kubernetes(6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.505920 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.575395 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.575462 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.575480 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.575509 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.575525 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:43Z","lastTransitionTime":"2025-09-30T17:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.678388 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.678435 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.678444 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.678462 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.678472 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:43Z","lastTransitionTime":"2025-09-30T17:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.706200 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.706272 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:04:43 crc kubenswrapper[4821]: E0930 17:04:43.706339 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:04:43 crc kubenswrapper[4821]: E0930 17:04:43.706535 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.781621 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.781676 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.781692 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.781707 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.781718 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:43Z","lastTransitionTime":"2025-09-30T17:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.885262 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.885326 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.885339 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.885365 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.885383 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:43Z","lastTransitionTime":"2025-09-30T17:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.989711 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.989785 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.989802 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.989821 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:43 crc kubenswrapper[4821]: I0930 17:04:43.989833 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:43Z","lastTransitionTime":"2025-09-30T17:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.092138 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.092201 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.092211 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.092227 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.092241 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:44Z","lastTransitionTime":"2025-09-30T17:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.195769 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.195820 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.195839 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.195863 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.195879 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:44Z","lastTransitionTime":"2025-09-30T17:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.299122 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.299225 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.299255 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.299332 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.299357 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:44Z","lastTransitionTime":"2025-09-30T17:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.402917 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.402966 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.402982 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.403006 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.403022 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:44Z","lastTransitionTime":"2025-09-30T17:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.505340 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.505376 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.505387 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.505402 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.505413 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:44Z","lastTransitionTime":"2025-09-30T17:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.607931 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.607988 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.608004 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.608025 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.608041 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:44Z","lastTransitionTime":"2025-09-30T17:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.706752 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.706779 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:04:44 crc kubenswrapper[4821]: E0930 17:04:44.706892 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:04:44 crc kubenswrapper[4821]: E0930 17:04:44.706931 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.710799 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.710890 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.710929 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.710945 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.710957 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:44Z","lastTransitionTime":"2025-09-30T17:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.722111 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9sjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c84981f2-eb86-4d0d-9322-db1b62feeac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb09282aaacd229c66305d60e720c01a4f2ae0ffa6aadaf7e89fb3976883bb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:04:34Z\\\",\\\"message\\\":\\\"2025-09-30T17:03:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7a2c0467-0fb1-4f7d-9f5b-c0e665d265ab\\\\n2025-09-30T17:03:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7a2c0467-0fb1-4f7d-9f5b-c0e665d265ab to /host/opt/cni/bin/\\\\n2025-09-30T17:03:49Z [verbose] multus-daemon started\\\\n2025-09-30T17:03:49Z [verbose] Readiness Indicator file check\\\\n2025-09-30T17:04:34Z [error] have 
you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xwk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9sjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:44Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.735535 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zkvtw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djdqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djdqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:04:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zkvtw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:44Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.749628 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3bd3e0-5235-416c-bab2-f2b60bab29cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9d50af49f77546ea6f04474dd222a4b9ee7d29f799ee20fad03c5ddf8e0b43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73c51f7ec47dab237eb69ee8ea74794f461ba42a02a6f2de5787c0a3bc972313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43f233324c870ea4a282464cb1d7a96b75d314a73d505fe47373c4269f1daeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf28842a6b40581032806bee8507201e74dac6ef16e4cba7573389672be0c37\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf28842a6b40581032806bee8507201e74dac6ef16e4cba7573389672be0c37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:44Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.761211 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc9a3823-19e2-412d-a305-b847467f940a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c4fba12f3da3af2b564ce19ef6a7306b5e4a6e41d35c2dd21d97df93c4113fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8008ebe7732b90014ee2dbb9e7b9cdf8e7a339a62e38e6c31e1ec7f6ca9ca7e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8008ebe7732b90014ee2dbb9e7b9cdf8e7a339a62e38e6c31e1ec7f6ca9ca7e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:44Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.773618 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1490d367ca18a45fc9e4ea90d7ac54bb8e5883d4b2fab0bf281407ff37d3657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:44Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.787748 4821 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-hbtzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80792f77-19d5-48f4-b3ea-5d53f770cb33\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64bbffd2fa8cfc663d6c180af60a163a7683066e671df2cf4c324b631009ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5zq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hbtzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:44Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.800541 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7f730c5-27f4-4341-bda1-6d80a934c8a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc051844bddc6c02765489dee126fa8bdb6a5d11da56c0a53bb98e66e60e5bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef37bde42a138d4baba955932ff8ab52b89163ebe785cf778d7cd1bd397ccbd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e54b4155e2016eda97234667c9047be5c828a39c176c0f8eba33173598354\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79c8558a3156070e6fcf5c3bcabc43dfbc3e9575961e6cf776542d8f1fbe6fb8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:44Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.813146 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.813191 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.813201 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.813243 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.813256 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:44Z","lastTransitionTime":"2025-09-30T17:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.813981 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22c1c4c3-b314-4e7b-9f4d-e83a4c9c91b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c8193f74cb6e485a4d20caa7f9ba2691f8d1ac83503ae9af306c7efa4d33c12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6764e38e49e4479b229fe1e3e81794532d69631d203e2f65abd50ca745af0c79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ccb648ace62daee51a1c76185538f3f98608072bdb11ca270331ad500bac91d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e328e723ef53fce610e29a32688cbc81a7cde9117de9fbe57fc0cbf4985c224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5423d5692ded16b02086fac931b41d58891bf7b29f9fd63f4d916f7c4945ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9ca8be087568cbc66273f77c72de02c860317988aa7c82d8b497a165d39577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://854de794edff4f9480f698d70f32b5775a9ca72b0c902164a7f9eaecfb0720ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kbggh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jpnpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:44Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.824897 4821 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-55rq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13b69688-7cac-4423-b4e1-553755af1baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca1fffcfa3c00155f7cd2e174872c2ea5baf5a8e4c111f482ac2f43736645df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgvmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55rq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:44Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.834776 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:44Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.845954 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:44Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.862823 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f87650c419798aa4e552bc4de873cf3fdacf8924
e926aec7c1d257aa812bb437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:04:41Z\\\",\\\"message\\\":\\\"k controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:41Z is after 2025-08-24T17:21:41Z]\\\\nI0930 17:04:41.639758 6739 services_controller.go:434] Service openshift-machine-api/control-plane-machine-set-operator retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{control-plane-machine-set-operator openshift-machine-api ffd0ef27-d28d-43cc-90c8-0e8843e4c04c 4409 0 2025-02-23 05:12:21 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[k8s-app:control-plane-machine-set-operator] map[capability.openshift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true service.alpha.openshift.io/serving-cert-secret-name:control-plane-machine-set-operator-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc006ecd407 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:04:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k7m5w_openshift-ovn-kubernetes(6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k7m5w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:44Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.874166 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ed48c9a-6f81-43be-9b63-906ab51dc67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12383b2a87a569574eb67b31e8fed56fcf4ed8ee981714de224d0336ce782c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2234f5a0ce398fbf4075fa759a154ccf0ac4175f55c41e0c21eac73ea23efa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rdm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9dvx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:44Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.888005 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b2394cd-599d-4c51-8cb4-3c2cfbe067a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33df3b6c219ad7d6d3c33a67842ee6e7857c10aa0b590086dc38d567e24d73e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://234d14d76e4baefd2068fd949d3f85d1a0cd8849825fc261ac79d9f5cf6af533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc47592f4ebec8e4d505f2403e07adb10543537a187efd2be095b04ec87ce98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc042ce02785326f5b2c0316774ffba46609cf5983d731f84580109ec437611b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb540b45f14fb67b68d186f4ac99aeb8cd819b64a7a5751273b953043281a7c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:03:44.169718 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:03:44.170188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:03:44.173193 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2447113479/tls.crt::/tmp/serving-cert-2447113479/tls.key\\\\\\\"\\\\nI0930 17:03:44.503771 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:03:44.509550 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:03:44.509576 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:03:44.509599 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:03:44.509605 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:03:44.519451 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:03:44.519487 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:03:44.519498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:03:44.519501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:03:44.519504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:03:44.519508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:03:44.519533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:03:44.522165 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9841b61828d0ca1da6fccb1623f0b6077acd5ef2e0e1699083fb7695a233b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9357c51b7a249d395fbf9ff559a2db86a1cde2dabf763653941202ddfd57835\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:03:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:24Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:44Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.905666 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f47b4d923979a469e6a8747c13390e93a696e638a25d825693fd0ba65aadbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:44Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.917168 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.917218 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.917230 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.917258 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.917270 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:44Z","lastTransitionTime":"2025-09-30T17:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.918840 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:44Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.935300 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6909c6906cc536d18f883d197ca035b4de3ff42bcc24b3c9e72ec37baeb9840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca8171574f556239c28b9d125ec1a40c9fcae9e0d191f70f7063eed77a2036b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:44Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:44 crc kubenswrapper[4821]: I0930 17:04:44.948903 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c2ce348-eadc-4629-a03f-fb8924b5b434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:03:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58b655003a776fedee2357f3990670ce4cfce738a723abea67fc0c8df7459db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:03:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md5fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:03:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q2xpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:44Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.021230 4821 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.021276 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.021287 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.021301 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.021310 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:45Z","lastTransitionTime":"2025-09-30T17:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.123963 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.124067 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.124102 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.124118 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.124130 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:45Z","lastTransitionTime":"2025-09-30T17:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.226183 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.226223 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.226233 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.226247 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.226259 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:45Z","lastTransitionTime":"2025-09-30T17:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.328562 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.328599 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.328609 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.328624 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.328634 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:45Z","lastTransitionTime":"2025-09-30T17:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.430580 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.430634 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.430645 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.430659 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.430669 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:45Z","lastTransitionTime":"2025-09-30T17:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.533275 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.533333 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.533350 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.533373 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.533392 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:45Z","lastTransitionTime":"2025-09-30T17:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.637508 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.637603 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.637626 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.637650 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.637668 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:45Z","lastTransitionTime":"2025-09-30T17:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.706407 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.706448 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:04:45 crc kubenswrapper[4821]: E0930 17:04:45.706519 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:04:45 crc kubenswrapper[4821]: E0930 17:04:45.706650 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.740312 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.740368 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.740385 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.740409 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.740429 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:45Z","lastTransitionTime":"2025-09-30T17:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.843864 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.843927 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.843940 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.843957 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.843968 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:45Z","lastTransitionTime":"2025-09-30T17:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.946578 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.946635 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.946645 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.946660 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:45 crc kubenswrapper[4821]: I0930 17:04:45.946669 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:45Z","lastTransitionTime":"2025-09-30T17:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.049693 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.049740 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.049758 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.049780 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.049797 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:46Z","lastTransitionTime":"2025-09-30T17:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.152812 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.152880 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.152901 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.152929 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.152951 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:46Z","lastTransitionTime":"2025-09-30T17:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.259972 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.260034 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.260054 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.260111 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.260154 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:46Z","lastTransitionTime":"2025-09-30T17:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.363912 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.364017 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.364069 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.364147 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.364166 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:46Z","lastTransitionTime":"2025-09-30T17:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.483738 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.483790 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.483992 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.484017 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.484029 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:46Z","lastTransitionTime":"2025-09-30T17:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.587386 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.587441 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.587461 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.587485 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.587502 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:46Z","lastTransitionTime":"2025-09-30T17:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.690508 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.690950 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.690971 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.690997 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.691015 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:46Z","lastTransitionTime":"2025-09-30T17:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.706052 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.706266 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:04:46 crc kubenswrapper[4821]: E0930 17:04:46.706430 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:04:46 crc kubenswrapper[4821]: E0930 17:04:46.706637 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.793652 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.793706 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.793723 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.793747 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.793763 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:46Z","lastTransitionTime":"2025-09-30T17:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.897180 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.897245 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.897263 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.897289 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.897317 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:46Z","lastTransitionTime":"2025-09-30T17:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.999449 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.999537 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.999569 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.999603 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:46 crc kubenswrapper[4821]: I0930 17:04:46.999623 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:46Z","lastTransitionTime":"2025-09-30T17:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.102271 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.102336 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.102354 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.102375 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.102391 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:47Z","lastTransitionTime":"2025-09-30T17:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.204836 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.204876 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.204892 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.204914 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.204933 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:47Z","lastTransitionTime":"2025-09-30T17:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.308019 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.308075 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.308113 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.308129 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.308137 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:47Z","lastTransitionTime":"2025-09-30T17:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.410349 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.410393 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.410407 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.410425 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.410438 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:47Z","lastTransitionTime":"2025-09-30T17:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.513477 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.513532 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.513549 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.513569 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.513583 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:47Z","lastTransitionTime":"2025-09-30T17:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.617711 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.617757 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.617769 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.617786 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.617799 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:47Z","lastTransitionTime":"2025-09-30T17:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.706539 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.706544 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:04:47 crc kubenswrapper[4821]: E0930 17:04:47.706728 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:04:47 crc kubenswrapper[4821]: E0930 17:04:47.706877 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.721179 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.721243 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.721260 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.721284 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.721303 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:47Z","lastTransitionTime":"2025-09-30T17:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.723370 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.824156 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.824189 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.824199 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.824216 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.824228 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:47Z","lastTransitionTime":"2025-09-30T17:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.927403 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.927457 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.927481 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.927509 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:47 crc kubenswrapper[4821]: I0930 17:04:47.927532 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:47Z","lastTransitionTime":"2025-09-30T17:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.030417 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.030465 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.030480 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.030500 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.030514 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:48Z","lastTransitionTime":"2025-09-30T17:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.132907 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.133337 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.133462 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.133585 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.133699 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:48Z","lastTransitionTime":"2025-09-30T17:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.235460 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.235781 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.235910 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.236017 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.236157 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:48Z","lastTransitionTime":"2025-09-30T17:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.339880 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.340161 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.340235 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.341607 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.341727 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:48Z","lastTransitionTime":"2025-09-30T17:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.445136 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.445636 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.445730 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.445818 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.445891 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:48Z","lastTransitionTime":"2025-09-30T17:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.549551 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.549681 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.549735 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.549761 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.549810 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:48Z","lastTransitionTime":"2025-09-30T17:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.577263 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 30 17:04:48 crc kubenswrapper[4821]: E0930 17:04:48.577419 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:52.577391232 +0000 UTC m=+148.482437216 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.577471 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.577557 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 30 17:04:48 crc kubenswrapper[4821]: E0930 17:04:48.577672 4821 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Sep 30 17:04:48 crc kubenswrapper[4821]: E0930 17:04:48.577736 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 17:05:52.57771988 +0000 UTC m=+148.482765854 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Sep 30 17:04:48 crc kubenswrapper[4821]: E0930 17:04:48.577987 4821 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Sep 30 17:04:48 crc kubenswrapper[4821]: E0930 17:04:48.578094 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 17:05:52.578069938 +0000 UTC m=+148.483115882 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
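The "No retries permitted until ... (durationBeforeRetry 1m4s)" lines are the kubelet's per-volume exponential backoff. Assuming the upstream defaults for volume operations (roughly a 500 ms initial delay, doubling per consecutive failure, capped a little over two minutes; these constants are an assumption here, not something this log confirms), 1m4s would correspond to the eighth consecutive failure of the same operation. A sketch of that schedule:

    from datetime import timedelta

    # Assumed kubelet volume-operation backoff constants (modeled on the
    # exponential backoff used by nestedpendingoperations; illustrative only).
    INITIAL = timedelta(milliseconds=500)
    FACTOR = 2.0
    CAP = timedelta(minutes=2, seconds=2)

    def backoff_schedule(failures):
        """Yield the wait imposed after each consecutive failure."""
        wait = INITIAL
        for _ in range(failures):
            yield wait
            wait = min(timedelta(seconds=wait.total_seconds() * FACTOR), CAP)

    if __name__ == "__main__":
        for i, wait in enumerate(backoff_schedule(10), start=1):
            print(f"failure {i:2d}: next retry in {wait}")
        # Under these assumptions, failure 8 prints 0:01:04,
        # matching the "durationBeforeRetry 1m4s" entries above.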
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.651861 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.651928 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.651952 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.651985 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.652011 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:48Z","lastTransitionTime":"2025-09-30T17:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.678751 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.678826 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 30 17:04:48 crc kubenswrapper[4821]: E0930 17:04:48.679064 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Sep 30 17:04:48 crc kubenswrapper[4821]: E0930 17:04:48.679137 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Sep 30 17:04:48 crc kubenswrapper[4821]: E0930 17:04:48.679166 4821 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Sep 30 17:04:48 crc kubenswrapper[4821]: E0930 17:04:48.679252 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-09-30 17:05:52.679222898 +0000 UTC m=+148.584268882 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Sep 30 17:04:48 crc kubenswrapper[4821]: E0930 17:04:48.679401 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Sep 30 17:04:48 crc kubenswrapper[4821]: E0930 17:04:48.679445 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Sep 30 17:04:48 crc kubenswrapper[4821]: E0930 17:04:48.679465 4821 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Sep 30 17:04:48 crc kubenswrapper[4821]: E0930 17:04:48.679560 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-09-30 17:05:52.679527465 +0000 UTC m=+148.584573449 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.707053 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.707149 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw"
Sep 30 17:04:48 crc kubenswrapper[4821]: E0930 17:04:48.707316 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 30 17:04:48 crc kubenswrapper[4821]: E0930 17:04:48.707424 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc"
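Each `object "<namespace>"/"<name>" not registered` failure means the kubelet's object cache has not yet synced that ConfigMap or Secret, so the projected service-account volumes cannot be built, and the same few objects recur on every retry. A small extraction sketch (standard library; the regex matches only the error shape shown above):

    import re
    import sys

    # Matches the detail in the MountVolume.SetUp failures above, e.g.
    #   object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
    NOT_REGISTERED = re.compile(r'object "([^"]+)"/"([^"]+)" not registered')

    def unresolved_objects(lines):
        """Return the distinct namespace/name pairs the kubelet is still waiting on."""
        seen = set()
        for line in lines:
            seen.update(NOT_REGISTERED.findall(line))
        return sorted(seen)

    if __name__ == "__main__":
        # Usage: journalctl -u kubelet | python unresolved_objects.py
        for ns, name in unresolved_objects(sys.stdin):
            print(f"{ns}/{name}")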
pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.754793 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.754853 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.754871 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.754896 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.754916 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:48Z","lastTransitionTime":"2025-09-30T17:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.857698 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.857748 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.857761 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.857777 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.857786 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:48Z","lastTransitionTime":"2025-09-30T17:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.960433 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.960474 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.960486 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.960500 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:48 crc kubenswrapper[4821]: I0930 17:04:48.960509 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:48Z","lastTransitionTime":"2025-09-30T17:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.063765 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.063829 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.063852 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.063882 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.063903 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:49Z","lastTransitionTime":"2025-09-30T17:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.166322 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.166351 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.166359 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.166372 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.166380 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:49Z","lastTransitionTime":"2025-09-30T17:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.268184 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.268217 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.268226 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.268240 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.268249 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:49Z","lastTransitionTime":"2025-09-30T17:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.370049 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.370105 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.370119 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.370133 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.370144 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:49Z","lastTransitionTime":"2025-09-30T17:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.472532 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.472601 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.472625 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.472659 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.472678 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:49Z","lastTransitionTime":"2025-09-30T17:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.576210 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.576414 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.576588 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.576688 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.576768 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:49Z","lastTransitionTime":"2025-09-30T17:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.679884 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.680207 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.680346 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.680461 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.680588 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:49Z","lastTransitionTime":"2025-09-30T17:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.706520 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.706554 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 30 17:04:49 crc kubenswrapper[4821]: E0930 17:04:49.706961 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:04:49 crc kubenswrapper[4821]: E0930 17:04:49.707132 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.782985 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.783055 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.783067 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.783104 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.783117 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:49Z","lastTransitionTime":"2025-09-30T17:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.885569 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.885615 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.885627 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.885642 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.885654 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:49Z","lastTransitionTime":"2025-09-30T17:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.989548 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.989649 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.989733 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.989754 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:49 crc kubenswrapper[4821]: I0930 17:04:49.989767 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:49Z","lastTransitionTime":"2025-09-30T17:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.092583 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.092621 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.092637 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.092653 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.092664 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:50Z","lastTransitionTime":"2025-09-30T17:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.197741 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.198168 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.198379 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.198539 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.198678 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:50Z","lastTransitionTime":"2025-09-30T17:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.302776 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.302824 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.302833 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.302847 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.302857 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:50Z","lastTransitionTime":"2025-09-30T17:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.405604 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.405638 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.405647 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.405660 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.405670 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:50Z","lastTransitionTime":"2025-09-30T17:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.507699 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.507762 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.507779 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.507795 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.507805 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:50Z","lastTransitionTime":"2025-09-30T17:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.611149 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.611458 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.611556 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.611621 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.611695 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:50Z","lastTransitionTime":"2025-09-30T17:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.707842 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw"
Sep 30 17:04:50 crc kubenswrapper[4821]: E0930 17:04:50.707945 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc"
Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.708154 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 30 17:04:50 crc kubenswrapper[4821]: E0930 17:04:50.708218 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
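The "Error syncing pod, skipping" entries name exactly which pods cannot get a sandbox until a CNI config appears under /etc/kubernetes/cni/net.d/; tallying them shows the blast radius at a glance. A sketch in the same spirit as the earlier snippets:

    import re
    import sys
    from collections import Counter

    # Matches the pod reference on the "Error syncing pod, skipping" lines above.
    POD_ERR = re.compile(r'"Error syncing pod, skipping".*pod="([^"]+)" podUID="([^"]+)"')

    if __name__ == "__main__":
        # Usage: journalctl -u kubelet | python blocked_pods.py
        counts = Counter()
        for line in sys.stdin:
            for pod, uid in POD_ERR.findall(line):
                counts[(pod, uid)] += 1
        for (pod, uid), n in counts.most_common():
            print(f"{n:4d}  {pod}  uid={uid}")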
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.713505 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.713564 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.713577 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.713591 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.713602 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:50Z","lastTransitionTime":"2025-09-30T17:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.816111 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.816146 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.816154 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.816167 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.816175 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:50Z","lastTransitionTime":"2025-09-30T17:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.917874 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.918121 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.918191 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.918343 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:50 crc kubenswrapper[4821]: I0930 17:04:50.918404 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:50Z","lastTransitionTime":"2025-09-30T17:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.021173 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.021440 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.021586 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.021685 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.021779 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:51Z","lastTransitionTime":"2025-09-30T17:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.124011 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.124047 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.124055 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.124129 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.124140 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:51Z","lastTransitionTime":"2025-09-30T17:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.227145 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.227210 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.227222 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.227239 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.227258 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:51Z","lastTransitionTime":"2025-09-30T17:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.330682 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.330987 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.331066 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.331164 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.331228 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:51Z","lastTransitionTime":"2025-09-30T17:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.434119 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.434173 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.434190 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.434212 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.434227 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:51Z","lastTransitionTime":"2025-09-30T17:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.536650 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.537348 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.537397 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.537433 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.537459 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:51Z","lastTransitionTime":"2025-09-30T17:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.640796 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.640877 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.640899 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.640923 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.640937 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:51Z","lastTransitionTime":"2025-09-30T17:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.706844 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.706852 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 30 17:04:51 crc kubenswrapper[4821]: E0930 17:04:51.707143 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:04:51 crc kubenswrapper[4821]: E0930 17:04:51.707178 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.744111 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.744151 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.744173 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.744194 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.744206 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:51Z","lastTransitionTime":"2025-09-30T17:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.846553 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.846622 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.846635 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.846651 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.846663 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:51Z","lastTransitionTime":"2025-09-30T17:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.948269 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.948310 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.948321 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.948337 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:51 crc kubenswrapper[4821]: I0930 17:04:51.948348 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:51Z","lastTransitionTime":"2025-09-30T17:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.050411 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.050442 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.050452 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.050467 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.050476 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:52Z","lastTransitionTime":"2025-09-30T17:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.152948 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.153018 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.153041 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.153070 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.153142 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:52Z","lastTransitionTime":"2025-09-30T17:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.255651 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.255705 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.255726 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.255750 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.255766 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:52Z","lastTransitionTime":"2025-09-30T17:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.358494 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.358553 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.358564 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.358580 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.358591 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:52Z","lastTransitionTime":"2025-09-30T17:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.461960 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.462039 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.462061 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.462112 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.462130 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:52Z","lastTransitionTime":"2025-09-30T17:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.564750 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.564791 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.564801 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.564815 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.564825 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:52Z","lastTransitionTime":"2025-09-30T17:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.667922 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.668022 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.668042 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.668132 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.668152 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:52Z","lastTransitionTime":"2025-09-30T17:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.706793 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.706807 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:04:52 crc kubenswrapper[4821]: E0930 17:04:52.707138 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:04:52 crc kubenswrapper[4821]: E0930 17:04:52.707248 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.770667 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.770715 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.770725 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.770742 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.770753 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:52Z","lastTransitionTime":"2025-09-30T17:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.810444 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.810485 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.810496 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.810513 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.810523 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:52Z","lastTransitionTime":"2025-09-30T17:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:04:52 crc kubenswrapper[4821]: E0930 17:04:52.834970 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:04:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2052ba73-7f50-4844-a6ef-43008c5ca24e\\\",\\\"systemUUID\\\":\\\"3c12aacb-94c6-4a5c-b29c-6c2e5c30c341\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:04:52Z is after 2025-08-24T17:21:41Z" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.839441 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.839504 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.839515 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.839536 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.839545 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:04:52Z","lastTransitionTime":"2025-09-30T17:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.891029 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-hv255"] Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.891727 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hv255" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.893754 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.893949 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.894231 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.894243 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.945039 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=68.945020941 podStartE2EDuration="1m8.945020941s" podCreationTimestamp="2025-09-30 17:03:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:04:52.944964529 +0000 UTC m=+88.850010513" watchObservedRunningTime="2025-09-30 17:04:52.945020941 +0000 UTC m=+88.850066885" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.945292 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=5.945286837 podStartE2EDuration="5.945286837s" podCreationTimestamp="2025-09-30 17:04:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:04:52.929594821 +0000 UTC m=+88.834640775" watchObservedRunningTime="2025-09-30 17:04:52.945286837 +0000 UTC m=+88.850332781" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.981680 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-jpnpn" podStartSLOduration=66.981654695 podStartE2EDuration="1m6.981654695s" podCreationTimestamp="2025-09-30 17:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:04:52.966262986 +0000 UTC m=+88.871308950" watchObservedRunningTime="2025-09-30 17:04:52.981654695 +0000 UTC m=+88.886700679" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.997826 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-55rq2" podStartSLOduration=67.997805973 podStartE2EDuration="1m7.997805973s" podCreationTimestamp="2025-09-30 17:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:04:52.981776098 +0000 UTC m=+88.886822042" watchObservedRunningTime="2025-09-30 17:04:52.997805973 +0000 UTC m=+88.902851917" Sep 30 17:04:52 crc kubenswrapper[4821]: I0930 17:04:52.998281 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9dvx5" podStartSLOduration=66.998276873 podStartE2EDuration="1m6.998276873s" podCreationTimestamp="2025-09-30 17:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:04:52.997899215 +0000 UTC m=+88.902945179" watchObservedRunningTime="2025-09-30 17:04:52.998276873 +0000 UTC m=+88.903322817" Sep 30 17:04:53 crc kubenswrapper[4821]: I0930 17:04:53.026010 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/124cc06e-25c8-472d-8e36-5a411be55cfc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hv255\" (UID: \"124cc06e-25c8-472d-8e36-5a411be55cfc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hv255" Sep 30 17:04:53 crc kubenswrapper[4821]: I0930 17:04:53.026211 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/124cc06e-25c8-472d-8e36-5a411be55cfc-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hv255\" (UID: \"124cc06e-25c8-472d-8e36-5a411be55cfc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hv255" Sep 30 17:04:53 crc kubenswrapper[4821]: I0930 17:04:53.026264 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/124cc06e-25c8-472d-8e36-5a411be55cfc-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hv255\" (UID: \"124cc06e-25c8-472d-8e36-5a411be55cfc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hv255" Sep 30 17:04:53 crc kubenswrapper[4821]: I0930 17:04:53.026401 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/124cc06e-25c8-472d-8e36-5a411be55cfc-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hv255\" (UID: \"124cc06e-25c8-472d-8e36-5a411be55cfc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hv255" Sep 30 17:04:53 crc kubenswrapper[4821]: I0930 17:04:53.026446 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/124cc06e-25c8-472d-8e36-5a411be55cfc-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hv255\" (UID: \"124cc06e-25c8-472d-8e36-5a411be55cfc\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hv255" Sep 30 17:04:53 crc kubenswrapper[4821]: I0930 17:04:53.033639 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=69.033619358 podStartE2EDuration="1m9.033619358s" podCreationTimestamp="2025-09-30 17:03:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:04:53.016616141 +0000 UTC m=+88.921662115" watchObservedRunningTime="2025-09-30 17:04:53.033619358 +0000 UTC m=+88.938665302" Sep 30 17:04:53 crc kubenswrapper[4821]: I0930 17:04:53.127483 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/124cc06e-25c8-472d-8e36-5a411be55cfc-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hv255\" (UID: \"124cc06e-25c8-472d-8e36-5a411be55cfc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hv255" Sep 30 17:04:53 crc kubenswrapper[4821]: I0930 17:04:53.127541 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/124cc06e-25c8-472d-8e36-5a411be55cfc-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hv255\" (UID: \"124cc06e-25c8-472d-8e36-5a411be55cfc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hv255" Sep 30 17:04:53 crc kubenswrapper[4821]: I0930 17:04:53.127606 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/124cc06e-25c8-472d-8e36-5a411be55cfc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hv255\" (UID: \"124cc06e-25c8-472d-8e36-5a411be55cfc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hv255" Sep 30 17:04:53 crc kubenswrapper[4821]: I0930 17:04:53.127683 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/124cc06e-25c8-472d-8e36-5a411be55cfc-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hv255\" (UID: \"124cc06e-25c8-472d-8e36-5a411be55cfc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hv255" Sep 30 17:04:53 crc kubenswrapper[4821]: I0930 17:04:53.127730 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/124cc06e-25c8-472d-8e36-5a411be55cfc-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hv255\" (UID: \"124cc06e-25c8-472d-8e36-5a411be55cfc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hv255" Sep 30 17:04:53 crc kubenswrapper[4821]: I0930 17:04:53.127798 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/124cc06e-25c8-472d-8e36-5a411be55cfc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hv255\" (UID: \"124cc06e-25c8-472d-8e36-5a411be55cfc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hv255" Sep 30 17:04:53 crc kubenswrapper[4821]: I0930 17:04:53.127845 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/124cc06e-25c8-472d-8e36-5a411be55cfc-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hv255\" (UID: 
\"124cc06e-25c8-472d-8e36-5a411be55cfc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hv255" Sep 30 17:04:53 crc kubenswrapper[4821]: I0930 17:04:53.128708 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/124cc06e-25c8-472d-8e36-5a411be55cfc-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hv255\" (UID: \"124cc06e-25c8-472d-8e36-5a411be55cfc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hv255" Sep 30 17:04:53 crc kubenswrapper[4821]: I0930 17:04:53.138197 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/124cc06e-25c8-472d-8e36-5a411be55cfc-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hv255\" (UID: \"124cc06e-25c8-472d-8e36-5a411be55cfc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hv255" Sep 30 17:04:53 crc kubenswrapper[4821]: I0930 17:04:53.158901 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=38.158883911 podStartE2EDuration="38.158883911s" podCreationTimestamp="2025-09-30 17:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:04:53.158100132 +0000 UTC m=+89.063146106" watchObservedRunningTime="2025-09-30 17:04:53.158883911 +0000 UTC m=+89.063929855" Sep 30 17:04:53 crc kubenswrapper[4821]: I0930 17:04:53.159044 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podStartSLOduration=68.159038665 podStartE2EDuration="1m8.159038665s" podCreationTimestamp="2025-09-30 17:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:04:53.138240859 +0000 UTC m=+89.043286813" watchObservedRunningTime="2025-09-30 17:04:53.159038665 +0000 UTC m=+89.064084609" Sep 30 17:04:53 crc kubenswrapper[4821]: I0930 17:04:53.169178 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/124cc06e-25c8-472d-8e36-5a411be55cfc-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hv255\" (UID: \"124cc06e-25c8-472d-8e36-5a411be55cfc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hv255" Sep 30 17:04:53 crc kubenswrapper[4821]: I0930 17:04:53.185358 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=20.185337478 podStartE2EDuration="20.185337478s" podCreationTimestamp="2025-09-30 17:04:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:04:53.174387843 +0000 UTC m=+89.079433787" watchObservedRunningTime="2025-09-30 17:04:53.185337478 +0000 UTC m=+89.090383422" Sep 30 17:04:53 crc kubenswrapper[4821]: I0930 17:04:53.216326 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-hbtzr" podStartSLOduration=68.216309401 podStartE2EDuration="1m8.216309401s" podCreationTimestamp="2025-09-30 17:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-09-30 17:04:53.197073722 +0000 UTC m=+89.102119676" watchObservedRunningTime="2025-09-30 17:04:53.216309401 +0000 UTC m=+89.121355345" Sep 30 17:04:53 crc kubenswrapper[4821]: I0930 17:04:53.219465 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hv255" Sep 30 17:04:53 crc kubenswrapper[4821]: I0930 17:04:53.232053 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-h9sjg" podStartSLOduration=68.232036738 podStartE2EDuration="1m8.232036738s" podCreationTimestamp="2025-09-30 17:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:04:53.219554226 +0000 UTC m=+89.124600170" watchObservedRunningTime="2025-09-30 17:04:53.232036738 +0000 UTC m=+89.137082682" Sep 30 17:04:53 crc kubenswrapper[4821]: W0930 17:04:53.246990 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod124cc06e_25c8_472d_8e36_5a411be55cfc.slice/crio-bead3446fb46f3b71501b0bea4d9220eb34fdc339b76fc316f5833e83c32d1b4 WatchSource:0}: Error finding container bead3446fb46f3b71501b0bea4d9220eb34fdc339b76fc316f5833e83c32d1b4: Status 404 returned error can't find the container with id bead3446fb46f3b71501b0bea4d9220eb34fdc339b76fc316f5833e83c32d1b4 Sep 30 17:04:53 crc kubenswrapper[4821]: I0930 17:04:53.706649 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:04:53 crc kubenswrapper[4821]: E0930 17:04:53.706756 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:04:53 crc kubenswrapper[4821]: I0930 17:04:53.706661 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:04:53 crc kubenswrapper[4821]: E0930 17:04:53.707717 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:04:54 crc kubenswrapper[4821]: I0930 17:04:54.249971 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hv255" event={"ID":"124cc06e-25c8-472d-8e36-5a411be55cfc","Type":"ContainerStarted","Data":"0e6a13490c69ed3ac2f4fe89e9f3888011df8883ea639b8f27a5f82c0af1c04a"} Sep 30 17:04:54 crc kubenswrapper[4821]: I0930 17:04:54.250038 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hv255" event={"ID":"124cc06e-25c8-472d-8e36-5a411be55cfc","Type":"ContainerStarted","Data":"bead3446fb46f3b71501b0bea4d9220eb34fdc339b76fc316f5833e83c32d1b4"} Sep 30 17:04:54 crc kubenswrapper[4821]: I0930 17:04:54.706965 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:04:54 crc kubenswrapper[4821]: I0930 17:04:54.707028 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:04:54 crc kubenswrapper[4821]: E0930 17:04:54.708174 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:04:54 crc kubenswrapper[4821]: E0930 17:04:54.708296 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:04:55 crc kubenswrapper[4821]: I0930 17:04:55.706366 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:04:55 crc kubenswrapper[4821]: I0930 17:04:55.706453 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:04:55 crc kubenswrapper[4821]: E0930 17:04:55.706499 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:04:55 crc kubenswrapper[4821]: E0930 17:04:55.706707 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:04:56 crc kubenswrapper[4821]: I0930 17:04:56.706782 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:04:56 crc kubenswrapper[4821]: I0930 17:04:56.706820 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:04:56 crc kubenswrapper[4821]: E0930 17:04:56.706921 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:04:56 crc kubenswrapper[4821]: E0930 17:04:56.707039 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:04:57 crc kubenswrapper[4821]: I0930 17:04:57.706158 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:04:57 crc kubenswrapper[4821]: I0930 17:04:57.706449 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:04:57 crc kubenswrapper[4821]: E0930 17:04:57.706676 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:04:57 crc kubenswrapper[4821]: I0930 17:04:57.706875 4821 scope.go:117] "RemoveContainer" containerID="f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437" Sep 30 17:04:57 crc kubenswrapper[4821]: E0930 17:04:57.706975 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-k7m5w_openshift-ovn-kubernetes(6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" Sep 30 17:04:57 crc kubenswrapper[4821]: E0930 17:04:57.707139 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:04:58 crc kubenswrapper[4821]: I0930 17:04:58.706223 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:04:58 crc kubenswrapper[4821]: I0930 17:04:58.706349 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:04:58 crc kubenswrapper[4821]: E0930 17:04:58.706587 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:04:58 crc kubenswrapper[4821]: E0930 17:04:58.706451 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:04:59 crc kubenswrapper[4821]: I0930 17:04:59.706453 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:04:59 crc kubenswrapper[4821]: I0930 17:04:59.706448 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:04:59 crc kubenswrapper[4821]: E0930 17:04:59.706577 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:04:59 crc kubenswrapper[4821]: E0930 17:04:59.706624 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:05:00 crc kubenswrapper[4821]: I0930 17:05:00.706822 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:05:00 crc kubenswrapper[4821]: I0930 17:05:00.706875 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:05:00 crc kubenswrapper[4821]: E0930 17:05:00.708564 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:05:00 crc kubenswrapper[4821]: E0930 17:05:00.708456 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:05:01 crc kubenswrapper[4821]: I0930 17:05:01.706439 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:05:01 crc kubenswrapper[4821]: E0930 17:05:01.706534 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:05:01 crc kubenswrapper[4821]: I0930 17:05:01.706439 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:05:01 crc kubenswrapper[4821]: E0930 17:05:01.706597 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:05:02 crc kubenswrapper[4821]: I0930 17:05:02.706321 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:05:02 crc kubenswrapper[4821]: I0930 17:05:02.706397 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:05:02 crc kubenswrapper[4821]: E0930 17:05:02.706460 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:05:02 crc kubenswrapper[4821]: E0930 17:05:02.706590 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:05:03 crc kubenswrapper[4821]: I0930 17:05:03.705936 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:05:03 crc kubenswrapper[4821]: I0930 17:05:03.705960 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:05:03 crc kubenswrapper[4821]: E0930 17:05:03.706040 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:05:03 crc kubenswrapper[4821]: E0930 17:05:03.706180 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:05:04 crc kubenswrapper[4821]: I0930 17:05:04.706488 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:05:04 crc kubenswrapper[4821]: I0930 17:05:04.706511 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:05:04 crc kubenswrapper[4821]: E0930 17:05:04.707433 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:05:04 crc kubenswrapper[4821]: E0930 17:05:04.707487 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:05:04 crc kubenswrapper[4821]: I0930 17:05:04.746724 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc-metrics-certs\") pod \"network-metrics-daemon-zkvtw\" (UID: \"3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc\") " pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:05:04 crc kubenswrapper[4821]: E0930 17:05:04.746899 4821 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 17:05:04 crc kubenswrapper[4821]: E0930 17:05:04.746986 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc-metrics-certs podName:3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc nodeName:}" failed. 
No retries permitted until 2025-09-30 17:06:08.74696251 +0000 UTC m=+164.652008494 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc-metrics-certs") pod "network-metrics-daemon-zkvtw" (UID: "3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc") : object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 17:05:05 crc kubenswrapper[4821]: I0930 17:05:05.706500 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:05:05 crc kubenswrapper[4821]: I0930 17:05:05.706506 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:05:05 crc kubenswrapper[4821]: E0930 17:05:05.707042 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:05:05 crc kubenswrapper[4821]: E0930 17:05:05.706978 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:05:06 crc kubenswrapper[4821]: I0930 17:05:06.706143 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:05:06 crc kubenswrapper[4821]: I0930 17:05:06.706177 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:05:06 crc kubenswrapper[4821]: E0930 17:05:06.706383 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:05:06 crc kubenswrapper[4821]: E0930 17:05:06.706696 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:05:07 crc kubenswrapper[4821]: I0930 17:05:07.706828 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:05:07 crc kubenswrapper[4821]: I0930 17:05:07.706828 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:05:07 crc kubenswrapper[4821]: E0930 17:05:07.707320 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:05:07 crc kubenswrapper[4821]: E0930 17:05:07.707404 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:05:08 crc kubenswrapper[4821]: I0930 17:05:08.706051 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:05:08 crc kubenswrapper[4821]: I0930 17:05:08.706187 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:05:08 crc kubenswrapper[4821]: E0930 17:05:08.706407 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:05:08 crc kubenswrapper[4821]: E0930 17:05:08.706776 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:05:09 crc kubenswrapper[4821]: I0930 17:05:09.706891 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:05:09 crc kubenswrapper[4821]: I0930 17:05:09.706892 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:05:09 crc kubenswrapper[4821]: E0930 17:05:09.707249 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:05:09 crc kubenswrapper[4821]: E0930 17:05:09.707957 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:05:09 crc kubenswrapper[4821]: I0930 17:05:09.708443 4821 scope.go:117] "RemoveContainer" containerID="f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437" Sep 30 17:05:09 crc kubenswrapper[4821]: E0930 17:05:09.708717 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-k7m5w_openshift-ovn-kubernetes(6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" Sep 30 17:05:10 crc kubenswrapper[4821]: I0930 17:05:10.706390 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:05:10 crc kubenswrapper[4821]: I0930 17:05:10.706445 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:05:10 crc kubenswrapper[4821]: E0930 17:05:10.706663 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:05:10 crc kubenswrapper[4821]: E0930 17:05:10.706836 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:05:11 crc kubenswrapper[4821]: I0930 17:05:11.706238 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:05:11 crc kubenswrapper[4821]: I0930 17:05:11.706323 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:05:11 crc kubenswrapper[4821]: E0930 17:05:11.706397 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:05:11 crc kubenswrapper[4821]: E0930 17:05:11.706723 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:05:12 crc kubenswrapper[4821]: I0930 17:05:12.707105 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:05:12 crc kubenswrapper[4821]: I0930 17:05:12.707112 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:05:12 crc kubenswrapper[4821]: E0930 17:05:12.708207 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:05:12 crc kubenswrapper[4821]: E0930 17:05:12.708012 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:05:13 crc kubenswrapper[4821]: I0930 17:05:13.706291 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:05:13 crc kubenswrapper[4821]: E0930 17:05:13.706502 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:05:13 crc kubenswrapper[4821]: I0930 17:05:13.706737 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:05:13 crc kubenswrapper[4821]: E0930 17:05:13.706812 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:05:14 crc kubenswrapper[4821]: I0930 17:05:14.707302 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:05:14 crc kubenswrapper[4821]: I0930 17:05:14.707302 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:05:14 crc kubenswrapper[4821]: E0930 17:05:14.708577 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:05:14 crc kubenswrapper[4821]: E0930 17:05:14.708710 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:05:15 crc kubenswrapper[4821]: I0930 17:05:15.706860 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:05:15 crc kubenswrapper[4821]: I0930 17:05:15.706881 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:05:15 crc kubenswrapper[4821]: E0930 17:05:15.707000 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:05:15 crc kubenswrapper[4821]: E0930 17:05:15.707130 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:05:16 crc kubenswrapper[4821]: I0930 17:05:16.706420 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:05:16 crc kubenswrapper[4821]: I0930 17:05:16.706423 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:05:16 crc kubenswrapper[4821]: E0930 17:05:16.706655 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:05:16 crc kubenswrapper[4821]: E0930 17:05:16.706827 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:05:17 crc kubenswrapper[4821]: I0930 17:05:17.706369 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:05:17 crc kubenswrapper[4821]: E0930 17:05:17.706477 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:05:17 crc kubenswrapper[4821]: I0930 17:05:17.706624 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:05:17 crc kubenswrapper[4821]: E0930 17:05:17.706666 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:05:18 crc kubenswrapper[4821]: I0930 17:05:18.707025 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:05:18 crc kubenswrapper[4821]: E0930 17:05:18.707799 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:05:18 crc kubenswrapper[4821]: I0930 17:05:18.707853 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:05:18 crc kubenswrapper[4821]: E0930 17:05:18.708030 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:05:19 crc kubenswrapper[4821]: I0930 17:05:19.706419 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:05:19 crc kubenswrapper[4821]: I0930 17:05:19.706522 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:05:19 crc kubenswrapper[4821]: E0930 17:05:19.706516 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:05:19 crc kubenswrapper[4821]: E0930 17:05:19.706578 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:05:20 crc kubenswrapper[4821]: I0930 17:05:20.706360 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:05:20 crc kubenswrapper[4821]: E0930 17:05:20.706499 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:05:20 crc kubenswrapper[4821]: I0930 17:05:20.706362 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:05:20 crc kubenswrapper[4821]: E0930 17:05:20.707118 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:05:20 crc kubenswrapper[4821]: I0930 17:05:20.707589 4821 scope.go:117] "RemoveContainer" containerID="f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437" Sep 30 17:05:20 crc kubenswrapper[4821]: E0930 17:05:20.707830 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-k7m5w_openshift-ovn-kubernetes(6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" Sep 30 17:05:21 crc kubenswrapper[4821]: I0930 17:05:21.325941 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9sjg_c84981f2-eb86-4d0d-9322-db1b62feeac8/kube-multus/1.log" Sep 30 17:05:21 crc kubenswrapper[4821]: I0930 17:05:21.326600 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9sjg_c84981f2-eb86-4d0d-9322-db1b62feeac8/kube-multus/0.log" Sep 30 17:05:21 crc kubenswrapper[4821]: I0930 17:05:21.326654 4821 generic.go:334] "Generic (PLEG): container finished" podID="c84981f2-eb86-4d0d-9322-db1b62feeac8" containerID="bb09282aaacd229c66305d60e720c01a4f2ae0ffa6aadaf7e89fb3976883bb66" exitCode=1 Sep 30 17:05:21 crc kubenswrapper[4821]: I0930 17:05:21.326688 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h9sjg" event={"ID":"c84981f2-eb86-4d0d-9322-db1b62feeac8","Type":"ContainerDied","Data":"bb09282aaacd229c66305d60e720c01a4f2ae0ffa6aadaf7e89fb3976883bb66"} Sep 30 17:05:21 crc kubenswrapper[4821]: I0930 17:05:21.326726 4821 scope.go:117] "RemoveContainer" containerID="9c71c6c5665a126542cd6cbe1e336b1bb53bdc864bd231b18ec046e830239edd" Sep 30 17:05:21 crc kubenswrapper[4821]: I0930 17:05:21.327048 4821 scope.go:117] "RemoveContainer" containerID="bb09282aaacd229c66305d60e720c01a4f2ae0ffa6aadaf7e89fb3976883bb66" Sep 30 17:05:21 crc kubenswrapper[4821]: E0930 17:05:21.327207 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-h9sjg_openshift-multus(c84981f2-eb86-4d0d-9322-db1b62feeac8)\"" pod="openshift-multus/multus-h9sjg" podUID="c84981f2-eb86-4d0d-9322-db1b62feeac8" Sep 30 17:05:21 crc kubenswrapper[4821]: I0930 17:05:21.348979 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hv255" podStartSLOduration=96.34895898 podStartE2EDuration="1m36.34895898s" podCreationTimestamp="2025-09-30 17:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:04:54.270117629 +0000 UTC m=+90.175163573" watchObservedRunningTime="2025-09-30 17:05:21.34895898 +0000 UTC m=+117.254004924" Sep 30 17:05:21 crc kubenswrapper[4821]: I0930 17:05:21.706250 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:05:21 crc kubenswrapper[4821]: I0930 17:05:21.706310 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:05:21 crc kubenswrapper[4821]: E0930 17:05:21.706388 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:05:21 crc kubenswrapper[4821]: E0930 17:05:21.706481 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:05:22 crc kubenswrapper[4821]: I0930 17:05:22.330368 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9sjg_c84981f2-eb86-4d0d-9322-db1b62feeac8/kube-multus/1.log" Sep 30 17:05:22 crc kubenswrapper[4821]: I0930 17:05:22.706696 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:05:22 crc kubenswrapper[4821]: I0930 17:05:22.706776 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:05:22 crc kubenswrapper[4821]: E0930 17:05:22.706879 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:05:22 crc kubenswrapper[4821]: E0930 17:05:22.706958 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:05:23 crc kubenswrapper[4821]: I0930 17:05:23.706770 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:05:23 crc kubenswrapper[4821]: I0930 17:05:23.706786 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:05:23 crc kubenswrapper[4821]: E0930 17:05:23.706900 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:05:23 crc kubenswrapper[4821]: E0930 17:05:23.706984 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:05:24 crc kubenswrapper[4821]: I0930 17:05:24.706623 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:05:24 crc kubenswrapper[4821]: I0930 17:05:24.707159 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:05:24 crc kubenswrapper[4821]: E0930 17:05:24.707606 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:05:24 crc kubenswrapper[4821]: E0930 17:05:24.708386 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:05:24 crc kubenswrapper[4821]: E0930 17:05:24.721307 4821 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Sep 30 17:05:24 crc kubenswrapper[4821]: E0930 17:05:24.802712 4821 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Sep 30 17:05:25 crc kubenswrapper[4821]: I0930 17:05:25.707198 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:05:25 crc kubenswrapper[4821]: I0930 17:05:25.707285 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:05:25 crc kubenswrapper[4821]: E0930 17:05:25.707341 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:05:25 crc kubenswrapper[4821]: E0930 17:05:25.707494 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:05:26 crc kubenswrapper[4821]: I0930 17:05:26.706989 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:05:26 crc kubenswrapper[4821]: E0930 17:05:26.707126 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:05:26 crc kubenswrapper[4821]: I0930 17:05:26.707161 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:05:26 crc kubenswrapper[4821]: E0930 17:05:26.707313 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:05:27 crc kubenswrapper[4821]: I0930 17:05:27.706430 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:05:27 crc kubenswrapper[4821]: E0930 17:05:27.706594 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:05:27 crc kubenswrapper[4821]: I0930 17:05:27.706844 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:05:27 crc kubenswrapper[4821]: E0930 17:05:27.706930 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:05:28 crc kubenswrapper[4821]: I0930 17:05:28.706464 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:05:28 crc kubenswrapper[4821]: I0930 17:05:28.706521 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:05:28 crc kubenswrapper[4821]: E0930 17:05:28.706603 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:05:28 crc kubenswrapper[4821]: E0930 17:05:28.706747 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:05:29 crc kubenswrapper[4821]: I0930 17:05:29.706306 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:05:29 crc kubenswrapper[4821]: I0930 17:05:29.706343 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:05:29 crc kubenswrapper[4821]: E0930 17:05:29.706433 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:05:29 crc kubenswrapper[4821]: E0930 17:05:29.706525 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:05:29 crc kubenswrapper[4821]: E0930 17:05:29.804640 4821 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Sep 30 17:05:30 crc kubenswrapper[4821]: I0930 17:05:30.706402 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:05:30 crc kubenswrapper[4821]: I0930 17:05:30.706490 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:05:30 crc kubenswrapper[4821]: E0930 17:05:30.706854 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:05:30 crc kubenswrapper[4821]: E0930 17:05:30.707076 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:05:31 crc kubenswrapper[4821]: I0930 17:05:31.706279 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:05:31 crc kubenswrapper[4821]: I0930 17:05:31.706335 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:05:31 crc kubenswrapper[4821]: E0930 17:05:31.706479 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:05:31 crc kubenswrapper[4821]: E0930 17:05:31.706681 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:05:32 crc kubenswrapper[4821]: I0930 17:05:32.706610 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:05:32 crc kubenswrapper[4821]: E0930 17:05:32.706764 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:05:32 crc kubenswrapper[4821]: I0930 17:05:32.706805 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:05:32 crc kubenswrapper[4821]: E0930 17:05:32.706922 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:05:33 crc kubenswrapper[4821]: I0930 17:05:33.706329 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:05:33 crc kubenswrapper[4821]: I0930 17:05:33.706849 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:05:33 crc kubenswrapper[4821]: E0930 17:05:33.707038 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:05:33 crc kubenswrapper[4821]: E0930 17:05:33.707264 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:05:33 crc kubenswrapper[4821]: I0930 17:05:33.707500 4821 scope.go:117] "RemoveContainer" containerID="f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437" Sep 30 17:05:34 crc kubenswrapper[4821]: I0930 17:05:34.364485 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7m5w_6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca/ovnkube-controller/3.log" Sep 30 17:05:34 crc kubenswrapper[4821]: I0930 17:05:34.366710 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" event={"ID":"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca","Type":"ContainerStarted","Data":"5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b"} Sep 30 17:05:34 crc kubenswrapper[4821]: I0930 17:05:34.367808 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:05:34 crc kubenswrapper[4821]: I0930 17:05:34.398024 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" podStartSLOduration=108.39800962 podStartE2EDuration="1m48.39800962s" podCreationTimestamp="2025-09-30 17:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:34.397679112 +0000 UTC m=+130.302725066" watchObservedRunningTime="2025-09-30 17:05:34.39800962 +0000 UTC m=+130.303055564" Sep 30 17:05:34 crc kubenswrapper[4821]: I0930 17:05:34.555307 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-zkvtw"] Sep 30 17:05:34 crc kubenswrapper[4821]: I0930 17:05:34.555406 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:05:34 crc kubenswrapper[4821]: E0930 17:05:34.555484 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:05:34 crc kubenswrapper[4821]: I0930 17:05:34.706992 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:05:34 crc kubenswrapper[4821]: E0930 17:05:34.709025 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:05:34 crc kubenswrapper[4821]: E0930 17:05:34.805158 4821 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Sep 30 17:05:35 crc kubenswrapper[4821]: I0930 17:05:35.706068 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:05:35 crc kubenswrapper[4821]: I0930 17:05:35.706256 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:05:35 crc kubenswrapper[4821]: E0930 17:05:35.706660 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:05:35 crc kubenswrapper[4821]: E0930 17:05:35.706514 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:05:35 crc kubenswrapper[4821]: I0930 17:05:35.706335 4821 scope.go:117] "RemoveContainer" containerID="bb09282aaacd229c66305d60e720c01a4f2ae0ffa6aadaf7e89fb3976883bb66" Sep 30 17:05:36 crc kubenswrapper[4821]: I0930 17:05:36.375985 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9sjg_c84981f2-eb86-4d0d-9322-db1b62feeac8/kube-multus/1.log" Sep 30 17:05:36 crc kubenswrapper[4821]: I0930 17:05:36.376033 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h9sjg" event={"ID":"c84981f2-eb86-4d0d-9322-db1b62feeac8","Type":"ContainerStarted","Data":"992fb7240af9437f906ca1508151e95430f320926f2db765fd848ac767958dec"} Sep 30 17:05:36 crc kubenswrapper[4821]: I0930 17:05:36.706607 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:05:36 crc kubenswrapper[4821]: I0930 17:05:36.706634 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:05:36 crc kubenswrapper[4821]: E0930 17:05:36.707424 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:05:36 crc kubenswrapper[4821]: E0930 17:05:36.707555 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:05:37 crc kubenswrapper[4821]: I0930 17:05:37.706651 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:05:37 crc kubenswrapper[4821]: E0930 17:05:37.706766 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:05:37 crc kubenswrapper[4821]: I0930 17:05:37.706656 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:05:37 crc kubenswrapper[4821]: E0930 17:05:37.706958 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:05:38 crc kubenswrapper[4821]: I0930 17:05:38.706305 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:05:38 crc kubenswrapper[4821]: I0930 17:05:38.706395 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:05:38 crc kubenswrapper[4821]: E0930 17:05:38.706426 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkvtw" podUID="3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc" Sep 30 17:05:38 crc kubenswrapper[4821]: E0930 17:05:38.706614 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:05:39 crc kubenswrapper[4821]: I0930 17:05:39.705974 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:05:39 crc kubenswrapper[4821]: I0930 17:05:39.706142 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:05:39 crc kubenswrapper[4821]: E0930 17:05:39.706211 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:05:39 crc kubenswrapper[4821]: E0930 17:05:39.706421 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:05:40 crc kubenswrapper[4821]: I0930 17:05:40.706451 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:05:40 crc kubenswrapper[4821]: I0930 17:05:40.706611 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:05:40 crc kubenswrapper[4821]: I0930 17:05:40.710384 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Sep 30 17:05:40 crc kubenswrapper[4821]: I0930 17:05:40.710518 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Sep 30 17:05:40 crc kubenswrapper[4821]: I0930 17:05:40.710777 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Sep 30 17:05:40 crc kubenswrapper[4821]: I0930 17:05:40.712752 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Sep 30 17:05:41 crc kubenswrapper[4821]: I0930 17:05:41.706964 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:05:41 crc kubenswrapper[4821]: I0930 17:05:41.707006 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:05:41 crc kubenswrapper[4821]: I0930 17:05:41.712838 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Sep 30 17:05:41 crc kubenswrapper[4821]: I0930 17:05:41.712968 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.834464 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.879226 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-gpsch"] Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.879967 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-gpsch" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.881438 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9t59s"] Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.881827 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9t59s" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.883546 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jsll6"] Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.884807 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.885150 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.885556 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.887981 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qrpnr"] Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.888689 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:43 crc kubenswrapper[4821]: W0930 17:05:43.891273 4821 reflector.go:561] object-"openshift-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-config-operator": no relationship found between node 'crc' and this object Sep 30 17:05:43 crc kubenswrapper[4821]: E0930 17:05:43.891308 4821 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Sep 30 17:05:43 crc kubenswrapper[4821]: W0930 17:05:43.891513 4821 reflector.go:561] object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z": failed to list *v1.Secret: secrets "openshift-config-operator-dockercfg-7pc5z" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-config-operator": no relationship found between node 'crc' and this object Sep 30 17:05:43 crc kubenswrapper[4821]: E0930 17:05:43.891540 4821 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-7pc5z\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-config-operator-dockercfg-7pc5z\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.893341 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.894966 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.895332 4821 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-apiserver"/"etcd-client" Sep 30 17:05:43 crc kubenswrapper[4821]: W0930 17:05:43.895649 4821 reflector.go:561] object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff": failed to list *v1.Secret: secrets "openshift-apiserver-sa-dockercfg-djjff" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Sep 30 17:05:43 crc kubenswrapper[4821]: E0930 17:05:43.895678 4821 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-djjff\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-sa-dockercfg-djjff\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.896399 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Sep 30 17:05:43 crc kubenswrapper[4821]: W0930 17:05:43.896458 4821 reflector.go:561] object-"openshift-config-operator"/"config-operator-serving-cert": failed to list *v1.Secret: secrets "config-operator-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-config-operator": no relationship found between node 'crc' and this object Sep 30 17:05:43 crc kubenswrapper[4821]: E0930 17:05:43.896653 4821 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"config-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"config-operator-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.897556 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.897739 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.897923 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.898113 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Sep 30 17:05:43 crc kubenswrapper[4821]: W0930 17:05:43.898247 4821 reflector.go:561] object-"openshift-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Sep 30 17:05:43 crc kubenswrapper[4821]: E0930 17:05:43.898271 4821 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found 
between node 'crc' and this object" logger="UnhandledError" Sep 30 17:05:43 crc kubenswrapper[4821]: W0930 17:05:43.898317 4821 reflector.go:561] object-"openshift-apiserver"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Sep 30 17:05:43 crc kubenswrapper[4821]: E0930 17:05:43.898330 4821 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.898379 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.913915 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.914122 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.913916 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.914402 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.914789 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.915336 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.915543 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.916014 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.916756 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tqpl"] Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.917328 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-k5fcf"] Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.917722 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5fcf" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.918266 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tqpl" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.918619 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-lzvgr"] Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.919232 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-lzvgr" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.919356 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls"] Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.919906 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.921397 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8pkhf"] Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.921792 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8pkhf" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.926987 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.927191 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wstkc"] Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.927455 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.927616 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.927702 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.927775 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wstkc" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.927812 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.928111 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.928448 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.928542 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.929945 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bsh9r"] Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.930339 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bsh9r" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.936248 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rbgqj"] Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.936811 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rbgqj" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.937609 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-5kj7q"] Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.939037 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-5kj7q" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.940122 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-4fxjh"] Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.940530 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-4fxjh" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.943176 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hn4g9"] Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.943863 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-hn4g9" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.953859 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wxvjw"] Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.954230 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zhl6b"] Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.954443 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-q92lq"] Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.954816 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-q92lq" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.955133 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-wxvjw" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.955276 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zhl6b" Sep 30 17:05:43 crc kubenswrapper[4821]: I0930 17:05:43.955679 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jsll6"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.031117 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.031463 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.031629 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.032199 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.032505 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.032702 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.032969 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.033224 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.041755 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.042044 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.042347 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.042399 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.042711 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.042785 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.042934 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.043020 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.043067 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.043122 4821 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"serving-cert" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.042936 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.043423 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.043459 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.043562 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.043582 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.043601 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.043807 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.044255 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.044310 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.044738 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.044816 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.045427 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.044741 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.045607 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.047658 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.047923 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.048135 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.048384 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.048464 4821 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.072798 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.072997 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.073161 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.073364 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.075189 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9t59s"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.075299 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.076391 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.076637 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.076821 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.076904 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.076991 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.077069 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.076926 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.077878 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.078167 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.078442 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.078852 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.079061 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.079307 4821 
Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.079436 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.079495 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1366fe1c-9d0b-4a6e-bfa4-7ced09204637-serving-cert\") pod \"etcd-operator-b45778765-hn4g9\" (UID: \"1366fe1c-9d0b-4a6e-bfa4-7ced09204637\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hn4g9"
Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.079543 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.079549 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e281506-b9d7-4e26-964f-e472f7f2661f-serving-cert\") pod \"route-controller-manager-6576b87f9c-bsh9r\" (UID: \"0e281506-b9d7-4e26-964f-e472f7f2661f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bsh9r"
Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.079567 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmkfc\" (UniqueName: \"kubernetes.io/projected/e4f2fba6-7528-40dc-8c18-6ca44115cf2a-kube-api-access-dmkfc\") pod \"machine-approver-56656f9798-k5fcf\" (UID: \"e4f2fba6-7528-40dc-8c18-6ca44115cf2a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5fcf"
Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.079607 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr"
Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.079635 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74612077-5860-4d75-8655-e48515893c20-metrics-tls\") pod \"dns-operator-744455d44c-q92lq\" (UID: \"74612077-5860-4d75-8655-e48515893c20\") " pod="openshift-dns-operator/dns-operator-744455d44c-q92lq"
Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.079641 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.079651 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr"
Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.079698 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/08d6cb47-472a-4bda-bfc0-738029e84e40-console-serving-cert\") pod \"console-f9d7485db-lzvgr\" (UID: \"08d6cb47-472a-4bda-bfc0-738029e84e40\") " pod="openshift-console/console-f9d7485db-lzvgr"
\"kubernetes.io/secret/08d6cb47-472a-4bda-bfc0-738029e84e40-console-serving-cert\") pod \"console-f9d7485db-lzvgr\" (UID: \"08d6cb47-472a-4bda-bfc0-738029e84e40\") " pod="openshift-console/console-f9d7485db-lzvgr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.079716 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.079731 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnflc\" (UniqueName: \"kubernetes.io/projected/a617aa70-de94-4903-863e-0b10a2c9253d-kube-api-access-xnflc\") pod \"openshift-config-operator-7777fb866f-9t59s\" (UID: \"a617aa70-de94-4903-863e-0b10a2c9253d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9t59s" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.079738 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.079771 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ccaea5c4-3efc-48ab-8159-6db6f5f77555-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-lcmls\" (UID: \"ccaea5c4-3efc-48ab-8159-6db6f5f77555\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.079788 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ccaea5c4-3efc-48ab-8159-6db6f5f77555-encryption-config\") pod \"apiserver-7bbb656c7d-lcmls\" (UID: \"ccaea5c4-3efc-48ab-8159-6db6f5f77555\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.079802 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e281506-b9d7-4e26-964f-e472f7f2661f-client-ca\") pod \"route-controller-manager-6576b87f9c-bsh9r\" (UID: \"0e281506-b9d7-4e26-964f-e472f7f2661f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bsh9r" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.079816 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hczkz\" (UniqueName: \"kubernetes.io/projected/0e281506-b9d7-4e26-964f-e472f7f2661f-kube-api-access-hczkz\") pod \"route-controller-manager-6576b87f9c-bsh9r\" (UID: \"0e281506-b9d7-4e26-964f-e472f7f2661f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bsh9r" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.079856 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6dx5\" (UniqueName: \"kubernetes.io/projected/5bf4a817-fc7e-4b1d-a530-b8280f4f2d0b-kube-api-access-f6dx5\") pod \"console-operator-58897d9998-5kj7q\" (UID: \"5bf4a817-fc7e-4b1d-a530-b8280f4f2d0b\") " pod="openshift-console-operator/console-operator-58897d9998-5kj7q" Sep 30 17:05:44 crc 
Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.079885 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08d6cb47-472a-4bda-bfc0-738029e84e40-trusted-ca-bundle\") pod \"console-f9d7485db-lzvgr\" (UID: \"08d6cb47-472a-4bda-bfc0-738029e84e40\") " pod="openshift-console/console-f9d7485db-lzvgr"
Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.079912 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.079931 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr"
Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.079948 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a617aa70-de94-4903-863e-0b10a2c9253d-serving-cert\") pod \"openshift-config-operator-7777fb866f-9t59s\" (UID: \"a617aa70-de94-4903-863e-0b10a2c9253d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9t59s"
Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.079963 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxjsr\" (UniqueName: \"kubernetes.io/projected/d87b4d2e-ced9-47af-8556-ddd4e0d57769-kube-api-access-jxjsr\") pod \"openshift-controller-manager-operator-756b6f6bc6-zhl6b\" (UID: \"d87b4d2e-ced9-47af-8556-ddd4e0d57769\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zhl6b"
Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080001 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080012 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/08d6cb47-472a-4bda-bfc0-738029e84e40-service-ca\") pod \"console-f9d7485db-lzvgr\" (UID: \"08d6cb47-472a-4bda-bfc0-738029e84e40\") " pod="openshift-console/console-f9d7485db-lzvgr"
Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080030 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/08d6cb47-472a-4bda-bfc0-738029e84e40-oauth-serving-cert\") pod \"console-f9d7485db-lzvgr\" (UID: \"08d6cb47-472a-4bda-bfc0-738029e84e40\") " pod="openshift-console/console-f9d7485db-lzvgr"
Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080046 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e4f2fba6-7528-40dc-8c18-6ca44115cf2a-machine-approver-tls\") pod \"machine-approver-56656f9798-k5fcf\" (UID: \"e4f2fba6-7528-40dc-8c18-6ca44115cf2a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5fcf"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e4f2fba6-7528-40dc-8c18-6ca44115cf2a-machine-approver-tls\") pod \"machine-approver-56656f9798-k5fcf\" (UID: \"e4f2fba6-7528-40dc-8c18-6ca44115cf2a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5fcf" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080095 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080096 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2adde79f-8b11-4ac3-88b6-ea7d2e6c5870-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rbgqj\" (UID: \"2adde79f-8b11-4ac3-88b6-ea7d2e6c5870\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rbgqj" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080118 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx9s4\" (UniqueName: \"kubernetes.io/projected/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-kube-api-access-fx9s4\") pod \"apiserver-76f77b778f-jsll6\" (UID: \"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080133 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1366fe1c-9d0b-4a6e-bfa4-7ced09204637-etcd-client\") pod \"etcd-operator-b45778765-hn4g9\" (UID: \"1366fe1c-9d0b-4a6e-bfa4-7ced09204637\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hn4g9" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080149 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d87b4d2e-ced9-47af-8556-ddd4e0d57769-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zhl6b\" (UID: \"d87b4d2e-ced9-47af-8556-ddd4e0d57769\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zhl6b" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080191 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e281506-b9d7-4e26-964f-e472f7f2661f-config\") pod \"route-controller-manager-6576b87f9c-bsh9r\" (UID: \"0e281506-b9d7-4e26-964f-e472f7f2661f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bsh9r" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080213 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e4f2fba6-7528-40dc-8c18-6ca44115cf2a-auth-proxy-config\") pod \"machine-approver-56656f9798-k5fcf\" (UID: \"e4f2fba6-7528-40dc-8c18-6ca44115cf2a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5fcf" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080227 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dplxd\" (UniqueName: \"kubernetes.io/projected/0353afa5-86b4-40c4-9633-c75046a0e84d-kube-api-access-dplxd\") pod 
\"controller-manager-879f6c89f-8pkhf\" (UID: \"0353afa5-86b4-40c4-9633-c75046a0e84d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8pkhf" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080268 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ccaea5c4-3efc-48ab-8159-6db6f5f77555-audit-dir\") pod \"apiserver-7bbb656c7d-lcmls\" (UID: \"ccaea5c4-3efc-48ab-8159-6db6f5f77555\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080286 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrjwr\" (UniqueName: \"kubernetes.io/projected/ccaea5c4-3efc-48ab-8159-6db6f5f77555-kube-api-access-lrjwr\") pod \"apiserver-7bbb656c7d-lcmls\" (UID: \"ccaea5c4-3efc-48ab-8159-6db6f5f77555\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080302 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/75516b13-a330-4e17-a2e1-bd1c04ad9500-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gpsch\" (UID: \"75516b13-a330-4e17-a2e1-bd1c04ad9500\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gpsch" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080344 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/eb3298cf-506d-4b81-a283-801f689c5db6-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-6tqpl\" (UID: \"eb3298cf-506d-4b81-a283-801f689c5db6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tqpl" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080364 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-audit-dir\") pod \"apiserver-76f77b778f-jsll6\" (UID: \"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080377 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0353afa5-86b4-40c4-9633-c75046a0e84d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8pkhf\" (UID: \"0353afa5-86b4-40c4-9633-c75046a0e84d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8pkhf" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080393 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-etcd-serving-ca\") pod \"apiserver-76f77b778f-jsll6\" (UID: \"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080434 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ccaea5c4-3efc-48ab-8159-6db6f5f77555-etcd-client\") pod \"apiserver-7bbb656c7d-lcmls\" (UID: \"ccaea5c4-3efc-48ab-8159-6db6f5f77555\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080450 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxgnj\" (UniqueName: \"kubernetes.io/projected/2f3a466e-6003-48b2-b71d-7047639f3548-kube-api-access-rxgnj\") pod \"authentication-operator-69f744f599-wxvjw\" (UID: \"2f3a466e-6003-48b2-b71d-7047639f3548\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wxvjw" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080466 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0353afa5-86b4-40c4-9633-c75046a0e84d-client-ca\") pod \"controller-manager-879f6c89f-8pkhf\" (UID: \"0353afa5-86b4-40c4-9633-c75046a0e84d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8pkhf" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080483 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080508 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1366fe1c-9d0b-4a6e-bfa4-7ced09204637-etcd-ca\") pod \"etcd-operator-b45778765-hn4g9\" (UID: \"1366fe1c-9d0b-4a6e-bfa4-7ced09204637\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hn4g9" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080523 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/75516b13-a330-4e17-a2e1-bd1c04ad9500-images\") pod \"machine-api-operator-5694c8668f-gpsch\" (UID: \"75516b13-a330-4e17-a2e1-bd1c04ad9500\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gpsch" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080537 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5bf4a817-fc7e-4b1d-a530-b8280f4f2d0b-trusted-ca\") pod \"console-operator-58897d9998-5kj7q\" (UID: \"5bf4a817-fc7e-4b1d-a530-b8280f4f2d0b\") " pod="openshift-console-operator/console-operator-58897d9998-5kj7q" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080579 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080597 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75516b13-a330-4e17-a2e1-bd1c04ad9500-config\") pod \"machine-api-operator-5694c8668f-gpsch\" (UID: \"75516b13-a330-4e17-a2e1-bd1c04ad9500\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-gpsch" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080611 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bwwv\" (UniqueName: \"kubernetes.io/projected/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-kube-api-access-7bwwv\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080640 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccaea5c4-3efc-48ab-8159-6db6f5f77555-serving-cert\") pod \"apiserver-7bbb656c7d-lcmls\" (UID: \"ccaea5c4-3efc-48ab-8159-6db6f5f77555\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080655 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/08d6cb47-472a-4bda-bfc0-738029e84e40-console-config\") pod \"console-f9d7485db-lzvgr\" (UID: \"08d6cb47-472a-4bda-bfc0-738029e84e40\") " pod="openshift-console/console-f9d7485db-lzvgr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080672 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-config\") pod \"apiserver-76f77b778f-jsll6\" (UID: \"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080686 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfn42\" (UniqueName: \"kubernetes.io/projected/1366fe1c-9d0b-4a6e-bfa4-7ced09204637-kube-api-access-lfn42\") pod \"etcd-operator-b45778765-hn4g9\" (UID: \"1366fe1c-9d0b-4a6e-bfa4-7ced09204637\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hn4g9" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080701 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f3a466e-6003-48b2-b71d-7047639f3548-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wxvjw\" (UID: \"2f3a466e-6003-48b2-b71d-7047639f3548\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wxvjw" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080724 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-audit-dir\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080738 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccaea5c4-3efc-48ab-8159-6db6f5f77555-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-lcmls\" (UID: \"ccaea5c4-3efc-48ab-8159-6db6f5f77555\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080754 4821 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4f2fba6-7528-40dc-8c18-6ca44115cf2a-config\") pod \"machine-approver-56656f9798-k5fcf\" (UID: \"e4f2fba6-7528-40dc-8c18-6ca44115cf2a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5fcf" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080793 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0353afa5-86b4-40c4-9633-c75046a0e84d-serving-cert\") pod \"controller-manager-879f6c89f-8pkhf\" (UID: \"0353afa5-86b4-40c4-9633-c75046a0e84d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8pkhf" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080811 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f3a466e-6003-48b2-b71d-7047639f3548-service-ca-bundle\") pod \"authentication-operator-69f744f599-wxvjw\" (UID: \"2f3a466e-6003-48b2-b71d-7047639f3548\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wxvjw" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080826 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-encryption-config\") pod \"apiserver-76f77b778f-jsll6\" (UID: \"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080841 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm59z\" (UniqueName: \"kubernetes.io/projected/2adde79f-8b11-4ac3-88b6-ea7d2e6c5870-kube-api-access-tm59z\") pod \"cluster-samples-operator-665b6dd947-rbgqj\" (UID: \"2adde79f-8b11-4ac3-88b6-ea7d2e6c5870\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rbgqj" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080866 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bf4a817-fc7e-4b1d-a530-b8280f4f2d0b-config\") pod \"console-operator-58897d9998-5kj7q\" (UID: \"5bf4a817-fc7e-4b1d-a530-b8280f4f2d0b\") " pod="openshift-console-operator/console-operator-58897d9998-5kj7q" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080881 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjk4p\" (UniqueName: \"kubernetes.io/projected/08d6cb47-472a-4bda-bfc0-738029e84e40-kube-api-access-vjk4p\") pod \"console-f9d7485db-lzvgr\" (UID: \"08d6cb47-472a-4bda-bfc0-738029e84e40\") " pod="openshift-console/console-f9d7485db-lzvgr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080897 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080911 4821 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgslc\" (UniqueName: \"kubernetes.io/projected/74612077-5860-4d75-8655-e48515893c20-kube-api-access-jgslc\") pod \"dns-operator-744455d44c-q92lq\" (UID: \"74612077-5860-4d75-8655-e48515893c20\") " pod="openshift-dns-operator/dns-operator-744455d44c-q92lq" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080925 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ccaea5c4-3efc-48ab-8159-6db6f5f77555-audit-policies\") pod \"apiserver-7bbb656c7d-lcmls\" (UID: \"ccaea5c4-3efc-48ab-8159-6db6f5f77555\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080940 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqdfp\" (UniqueName: \"kubernetes.io/projected/be92daba-f247-4cec-80cf-a858c1fc034e-kube-api-access-cqdfp\") pod \"openshift-apiserver-operator-796bbdcf4f-wstkc\" (UID: \"be92daba-f247-4cec-80cf-a858c1fc034e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wstkc" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080955 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-etcd-client\") pod \"apiserver-76f77b778f-jsll6\" (UID: \"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080968 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48wnr\" (UniqueName: \"kubernetes.io/projected/eb3298cf-506d-4b81-a283-801f689c5db6-kube-api-access-48wnr\") pod \"cluster-image-registry-operator-dc59b4c8b-6tqpl\" (UID: \"eb3298cf-506d-4b81-a283-801f689c5db6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tqpl" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080983 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0353afa5-86b4-40c4-9633-c75046a0e84d-config\") pod \"controller-manager-879f6c89f-8pkhf\" (UID: \"0353afa5-86b4-40c4-9633-c75046a0e84d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8pkhf" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.080996 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1366fe1c-9d0b-4a6e-bfa4-7ced09204637-etcd-service-ca\") pod \"etcd-operator-b45778765-hn4g9\" (UID: \"1366fe1c-9d0b-4a6e-bfa4-7ced09204637\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hn4g9" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.081010 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a617aa70-de94-4903-863e-0b10a2c9253d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9t59s\" (UID: \"a617aa70-de94-4903-863e-0b10a2c9253d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9t59s" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.081024 4821 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bf4a817-fc7e-4b1d-a530-b8280f4f2d0b-serving-cert\") pod \"console-operator-58897d9998-5kj7q\" (UID: \"5bf4a817-fc7e-4b1d-a530-b8280f4f2d0b\") " pod="openshift-console-operator/console-operator-58897d9998-5kj7q" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.081040 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jsll6\" (UID: \"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.081055 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-audit-policies\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.081069 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.081137 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hflpx\" (UniqueName: \"kubernetes.io/projected/75516b13-a330-4e17-a2e1-bd1c04ad9500-kube-api-access-hflpx\") pod \"machine-api-operator-5694c8668f-gpsch\" (UID: \"75516b13-a330-4e17-a2e1-bd1c04ad9500\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gpsch" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.081194 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eb3298cf-506d-4b81-a283-801f689c5db6-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-6tqpl\" (UID: \"eb3298cf-506d-4b81-a283-801f689c5db6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tqpl" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.081218 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-image-import-ca\") pod \"apiserver-76f77b778f-jsll6\" (UID: \"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.081247 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-serving-cert\") pod \"apiserver-76f77b778f-jsll6\" (UID: \"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.081262 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.081278 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1366fe1c-9d0b-4a6e-bfa4-7ced09204637-config\") pod \"etcd-operator-b45778765-hn4g9\" (UID: \"1366fe1c-9d0b-4a6e-bfa4-7ced09204637\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hn4g9" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.081294 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-node-pullsecrets\") pod \"apiserver-76f77b778f-jsll6\" (UID: \"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.081309 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-audit\") pod \"apiserver-76f77b778f-jsll6\" (UID: \"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.081323 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.081339 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snntl\" (UniqueName: \"kubernetes.io/projected/f0fb9646-336c-4014-92ca-bb5caa55dde5-kube-api-access-snntl\") pod \"downloads-7954f5f757-4fxjh\" (UID: \"f0fb9646-336c-4014-92ca-bb5caa55dde5\") " pod="openshift-console/downloads-7954f5f757-4fxjh" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.081363 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-gpsch"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.081386 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be92daba-f247-4cec-80cf-a858c1fc034e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-wstkc\" (UID: \"be92daba-f247-4cec-80cf-a858c1fc034e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wstkc" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.081479 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.081574 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f3a466e-6003-48b2-b71d-7047639f3548-serving-cert\") pod \"authentication-operator-69f744f599-wxvjw\" (UID: 
\"2f3a466e-6003-48b2-b71d-7047639f3548\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wxvjw" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.081608 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f3a466e-6003-48b2-b71d-7047639f3548-config\") pod \"authentication-operator-69f744f599-wxvjw\" (UID: \"2f3a466e-6003-48b2-b71d-7047639f3548\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wxvjw" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.081582 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.081618 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.081701 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.081745 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d87b4d2e-ced9-47af-8556-ddd4e0d57769-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zhl6b\" (UID: \"d87b4d2e-ced9-47af-8556-ddd4e0d57769\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zhl6b" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.081789 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be92daba-f247-4cec-80cf-a858c1fc034e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-wstkc\" (UID: \"be92daba-f247-4cec-80cf-a858c1fc034e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wstkc" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.081821 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/eb3298cf-506d-4b81-a283-801f689c5db6-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-6tqpl\" (UID: \"eb3298cf-506d-4b81-a283-801f689c5db6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tqpl" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.106514 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.121373 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6l6qm"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.122014 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.128492 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.128702 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.129736 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nhz6b"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.130182 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nhz6b" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.137036 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.137573 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.139437 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.139575 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtlgs"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.140196 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtlgs" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.141233 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.145003 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-crvsh"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.145482 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lw2j2"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.146362 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rmps9"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.146769 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-rmps9" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.147021 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lw2j2" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.147261 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-crvsh" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.148738 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320860-mxq6r"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.149068 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320860-mxq6r" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.149422 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.149496 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v2bg8"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.150310 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.151409 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.152737 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.153642 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-wvc92"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.154325 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ftrsr"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.154622 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ftrsr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.154861 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v2bg8" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.154983 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wvc92" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.155789 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.165375 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.168533 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.169740 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8gwrn"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.174925 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.181642 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-854tr"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.194637 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.194783 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8gwrn" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.195409 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-audit-dir\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.195505 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-audit-dir\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.195531 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccaea5c4-3efc-48ab-8159-6db6f5f77555-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-lcmls\" (UID: \"ccaea5c4-3efc-48ab-8159-6db6f5f77555\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.196033 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccaea5c4-3efc-48ab-8159-6db6f5f77555-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-lcmls\" (UID: \"ccaea5c4-3efc-48ab-8159-6db6f5f77555\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.196144 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4f2fba6-7528-40dc-8c18-6ca44115cf2a-config\") pod \"machine-approver-56656f9798-k5fcf\" 
(UID: \"e4f2fba6-7528-40dc-8c18-6ca44115cf2a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5fcf" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.196179 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0353afa5-86b4-40c4-9633-c75046a0e84d-serving-cert\") pod \"controller-manager-879f6c89f-8pkhf\" (UID: \"0353afa5-86b4-40c4-9633-c75046a0e84d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8pkhf" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.196195 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f3a466e-6003-48b2-b71d-7047639f3548-service-ca-bundle\") pod \"authentication-operator-69f744f599-wxvjw\" (UID: \"2f3a466e-6003-48b2-b71d-7047639f3548\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wxvjw" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.197149 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m2s62"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.197485 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5wqbt"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.197957 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rbgqj"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.198035 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5wqbt" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.198240 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-854tr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.198290 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-qgksb"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.198382 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m2s62" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.198814 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-qgksb" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.199957 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rszth"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.200313 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rszth" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201206 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-encryption-config\") pod \"apiserver-76f77b778f-jsll6\" (UID: \"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201252 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm59z\" (UniqueName: \"kubernetes.io/projected/2adde79f-8b11-4ac3-88b6-ea7d2e6c5870-kube-api-access-tm59z\") pod \"cluster-samples-operator-665b6dd947-rbgqj\" (UID: \"2adde79f-8b11-4ac3-88b6-ea7d2e6c5870\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rbgqj" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201281 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bf4a817-fc7e-4b1d-a530-b8280f4f2d0b-config\") pod \"console-operator-58897d9998-5kj7q\" (UID: \"5bf4a817-fc7e-4b1d-a530-b8280f4f2d0b\") " pod="openshift-console-operator/console-operator-58897d9998-5kj7q" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201298 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjk4p\" (UniqueName: \"kubernetes.io/projected/08d6cb47-472a-4bda-bfc0-738029e84e40-kube-api-access-vjk4p\") pod \"console-f9d7485db-lzvgr\" (UID: \"08d6cb47-472a-4bda-bfc0-738029e84e40\") " pod="openshift-console/console-f9d7485db-lzvgr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201323 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201343 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgslc\" (UniqueName: \"kubernetes.io/projected/74612077-5860-4d75-8655-e48515893c20-kube-api-access-jgslc\") pod \"dns-operator-744455d44c-q92lq\" (UID: \"74612077-5860-4d75-8655-e48515893c20\") " pod="openshift-dns-operator/dns-operator-744455d44c-q92lq" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201359 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ccaea5c4-3efc-48ab-8159-6db6f5f77555-audit-policies\") pod \"apiserver-7bbb656c7d-lcmls\" (UID: \"ccaea5c4-3efc-48ab-8159-6db6f5f77555\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201375 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqdfp\" (UniqueName: \"kubernetes.io/projected/be92daba-f247-4cec-80cf-a858c1fc034e-kube-api-access-cqdfp\") pod \"openshift-apiserver-operator-796bbdcf4f-wstkc\" (UID: \"be92daba-f247-4cec-80cf-a858c1fc034e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wstkc" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201394 4821 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-etcd-client\") pod \"apiserver-76f77b778f-jsll6\" (UID: \"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201409 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48wnr\" (UniqueName: \"kubernetes.io/projected/eb3298cf-506d-4b81-a283-801f689c5db6-kube-api-access-48wnr\") pod \"cluster-image-registry-operator-dc59b4c8b-6tqpl\" (UID: \"eb3298cf-506d-4b81-a283-801f689c5db6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tqpl" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201428 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0353afa5-86b4-40c4-9633-c75046a0e84d-config\") pod \"controller-manager-879f6c89f-8pkhf\" (UID: \"0353afa5-86b4-40c4-9633-c75046a0e84d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8pkhf" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201442 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1366fe1c-9d0b-4a6e-bfa4-7ced09204637-etcd-service-ca\") pod \"etcd-operator-b45778765-hn4g9\" (UID: \"1366fe1c-9d0b-4a6e-bfa4-7ced09204637\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hn4g9" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201456 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a617aa70-de94-4903-863e-0b10a2c9253d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9t59s\" (UID: \"a617aa70-de94-4903-863e-0b10a2c9253d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9t59s" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201470 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bf4a817-fc7e-4b1d-a530-b8280f4f2d0b-serving-cert\") pod \"console-operator-58897d9998-5kj7q\" (UID: \"5bf4a817-fc7e-4b1d-a530-b8280f4f2d0b\") " pod="openshift-console-operator/console-operator-58897d9998-5kj7q" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201485 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jsll6\" (UID: \"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201500 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-audit-policies\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201514 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-session\") pod 
\"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201531 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hflpx\" (UniqueName: \"kubernetes.io/projected/75516b13-a330-4e17-a2e1-bd1c04ad9500-kube-api-access-hflpx\") pod \"machine-api-operator-5694c8668f-gpsch\" (UID: \"75516b13-a330-4e17-a2e1-bd1c04ad9500\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gpsch" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201547 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eb3298cf-506d-4b81-a283-801f689c5db6-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-6tqpl\" (UID: \"eb3298cf-506d-4b81-a283-801f689c5db6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tqpl" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201563 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-image-import-ca\") pod \"apiserver-76f77b778f-jsll6\" (UID: \"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201577 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-serving-cert\") pod \"apiserver-76f77b778f-jsll6\" (UID: \"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201593 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201622 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1366fe1c-9d0b-4a6e-bfa4-7ced09204637-config\") pod \"etcd-operator-b45778765-hn4g9\" (UID: \"1366fe1c-9d0b-4a6e-bfa4-7ced09204637\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hn4g9" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201638 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-node-pullsecrets\") pod \"apiserver-76f77b778f-jsll6\" (UID: \"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201661 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-audit\") pod \"apiserver-76f77b778f-jsll6\" (UID: \"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201680 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201698 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snntl\" (UniqueName: \"kubernetes.io/projected/f0fb9646-336c-4014-92ca-bb5caa55dde5-kube-api-access-snntl\") pod \"downloads-7954f5f757-4fxjh\" (UID: \"f0fb9646-336c-4014-92ca-bb5caa55dde5\") " pod="openshift-console/downloads-7954f5f757-4fxjh" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201714 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be92daba-f247-4cec-80cf-a858c1fc034e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-wstkc\" (UID: \"be92daba-f247-4cec-80cf-a858c1fc034e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wstkc" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201732 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f3a466e-6003-48b2-b71d-7047639f3548-serving-cert\") pod \"authentication-operator-69f744f599-wxvjw\" (UID: \"2f3a466e-6003-48b2-b71d-7047639f3548\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wxvjw" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201749 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f3a466e-6003-48b2-b71d-7047639f3548-config\") pod \"authentication-operator-69f744f599-wxvjw\" (UID: \"2f3a466e-6003-48b2-b71d-7047639f3548\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wxvjw" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201767 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201784 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d87b4d2e-ced9-47af-8556-ddd4e0d57769-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zhl6b\" (UID: \"d87b4d2e-ced9-47af-8556-ddd4e0d57769\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zhl6b" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201803 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be92daba-f247-4cec-80cf-a858c1fc034e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-wstkc\" (UID: \"be92daba-f247-4cec-80cf-a858c1fc034e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wstkc" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201819 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/eb3298cf-506d-4b81-a283-801f689c5db6-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-6tqpl\" (UID: \"eb3298cf-506d-4b81-a283-801f689c5db6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tqpl" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201855 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1366fe1c-9d0b-4a6e-bfa4-7ced09204637-serving-cert\") pod \"etcd-operator-b45778765-hn4g9\" (UID: \"1366fe1c-9d0b-4a6e-bfa4-7ced09204637\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hn4g9" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201871 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e281506-b9d7-4e26-964f-e472f7f2661f-serving-cert\") pod \"route-controller-manager-6576b87f9c-bsh9r\" (UID: \"0e281506-b9d7-4e26-964f-e472f7f2661f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bsh9r" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201890 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmkfc\" (UniqueName: \"kubernetes.io/projected/e4f2fba6-7528-40dc-8c18-6ca44115cf2a-kube-api-access-dmkfc\") pod \"machine-approver-56656f9798-k5fcf\" (UID: \"e4f2fba6-7528-40dc-8c18-6ca44115cf2a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5fcf" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201905 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201931 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74612077-5860-4d75-8655-e48515893c20-metrics-tls\") pod \"dns-operator-744455d44c-q92lq\" (UID: \"74612077-5860-4d75-8655-e48515893c20\") " pod="openshift-dns-operator/dns-operator-744455d44c-q92lq" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201947 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201950 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a617aa70-de94-4903-863e-0b10a2c9253d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9t59s\" (UID: \"a617aa70-de94-4903-863e-0b10a2c9253d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9t59s" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201962 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/08d6cb47-472a-4bda-bfc0-738029e84e40-console-serving-cert\") pod 
\"console-f9d7485db-lzvgr\" (UID: \"08d6cb47-472a-4bda-bfc0-738029e84e40\") " pod="openshift-console/console-f9d7485db-lzvgr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.201981 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202001 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnflc\" (UniqueName: \"kubernetes.io/projected/a617aa70-de94-4903-863e-0b10a2c9253d-kube-api-access-xnflc\") pod \"openshift-config-operator-7777fb866f-9t59s\" (UID: \"a617aa70-de94-4903-863e-0b10a2c9253d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9t59s" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202017 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ccaea5c4-3efc-48ab-8159-6db6f5f77555-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-lcmls\" (UID: \"ccaea5c4-3efc-48ab-8159-6db6f5f77555\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202034 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ccaea5c4-3efc-48ab-8159-6db6f5f77555-encryption-config\") pod \"apiserver-7bbb656c7d-lcmls\" (UID: \"ccaea5c4-3efc-48ab-8159-6db6f5f77555\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202050 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e281506-b9d7-4e26-964f-e472f7f2661f-client-ca\") pod \"route-controller-manager-6576b87f9c-bsh9r\" (UID: \"0e281506-b9d7-4e26-964f-e472f7f2661f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bsh9r" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202067 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hczkz\" (UniqueName: \"kubernetes.io/projected/0e281506-b9d7-4e26-964f-e472f7f2661f-kube-api-access-hczkz\") pod \"route-controller-manager-6576b87f9c-bsh9r\" (UID: \"0e281506-b9d7-4e26-964f-e472f7f2661f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bsh9r" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202097 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6dx5\" (UniqueName: \"kubernetes.io/projected/5bf4a817-fc7e-4b1d-a530-b8280f4f2d0b-kube-api-access-f6dx5\") pod \"console-operator-58897d9998-5kj7q\" (UID: \"5bf4a817-fc7e-4b1d-a530-b8280f4f2d0b\") " pod="openshift-console-operator/console-operator-58897d9998-5kj7q" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202115 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/08d6cb47-472a-4bda-bfc0-738029e84e40-console-oauth-config\") pod \"console-f9d7485db-lzvgr\" (UID: \"08d6cb47-472a-4bda-bfc0-738029e84e40\") " pod="openshift-console/console-f9d7485db-lzvgr" Sep 30 17:05:44 crc 
kubenswrapper[4821]: I0930 17:05:44.202130 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08d6cb47-472a-4bda-bfc0-738029e84e40-trusted-ca-bundle\") pod \"console-f9d7485db-lzvgr\" (UID: \"08d6cb47-472a-4bda-bfc0-738029e84e40\") " pod="openshift-console/console-f9d7485db-lzvgr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202162 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202177 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a617aa70-de94-4903-863e-0b10a2c9253d-serving-cert\") pod \"openshift-config-operator-7777fb866f-9t59s\" (UID: \"a617aa70-de94-4903-863e-0b10a2c9253d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9t59s" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202194 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxjsr\" (UniqueName: \"kubernetes.io/projected/d87b4d2e-ced9-47af-8556-ddd4e0d57769-kube-api-access-jxjsr\") pod \"openshift-controller-manager-operator-756b6f6bc6-zhl6b\" (UID: \"d87b4d2e-ced9-47af-8556-ddd4e0d57769\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zhl6b" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202209 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/08d6cb47-472a-4bda-bfc0-738029e84e40-service-ca\") pod \"console-f9d7485db-lzvgr\" (UID: \"08d6cb47-472a-4bda-bfc0-738029e84e40\") " pod="openshift-console/console-f9d7485db-lzvgr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202222 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/08d6cb47-472a-4bda-bfc0-738029e84e40-oauth-serving-cert\") pod \"console-f9d7485db-lzvgr\" (UID: \"08d6cb47-472a-4bda-bfc0-738029e84e40\") " pod="openshift-console/console-f9d7485db-lzvgr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202239 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e4f2fba6-7528-40dc-8c18-6ca44115cf2a-machine-approver-tls\") pod \"machine-approver-56656f9798-k5fcf\" (UID: \"e4f2fba6-7528-40dc-8c18-6ca44115cf2a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5fcf" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202255 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2adde79f-8b11-4ac3-88b6-ea7d2e6c5870-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rbgqj\" (UID: \"2adde79f-8b11-4ac3-88b6-ea7d2e6c5870\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rbgqj" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202274 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx9s4\" 
(UniqueName: \"kubernetes.io/projected/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-kube-api-access-fx9s4\") pod \"apiserver-76f77b778f-jsll6\" (UID: \"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202289 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1366fe1c-9d0b-4a6e-bfa4-7ced09204637-etcd-client\") pod \"etcd-operator-b45778765-hn4g9\" (UID: \"1366fe1c-9d0b-4a6e-bfa4-7ced09204637\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hn4g9" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202307 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d87b4d2e-ced9-47af-8556-ddd4e0d57769-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zhl6b\" (UID: \"d87b4d2e-ced9-47af-8556-ddd4e0d57769\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zhl6b" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202323 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e281506-b9d7-4e26-964f-e472f7f2661f-config\") pod \"route-controller-manager-6576b87f9c-bsh9r\" (UID: \"0e281506-b9d7-4e26-964f-e472f7f2661f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bsh9r" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202354 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e4f2fba6-7528-40dc-8c18-6ca44115cf2a-auth-proxy-config\") pod \"machine-approver-56656f9798-k5fcf\" (UID: \"e4f2fba6-7528-40dc-8c18-6ca44115cf2a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5fcf" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202372 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dplxd\" (UniqueName: \"kubernetes.io/projected/0353afa5-86b4-40c4-9633-c75046a0e84d-kube-api-access-dplxd\") pod \"controller-manager-879f6c89f-8pkhf\" (UID: \"0353afa5-86b4-40c4-9633-c75046a0e84d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8pkhf" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202675 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ccaea5c4-3efc-48ab-8159-6db6f5f77555-audit-dir\") pod \"apiserver-7bbb656c7d-lcmls\" (UID: \"ccaea5c4-3efc-48ab-8159-6db6f5f77555\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202693 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrjwr\" (UniqueName: \"kubernetes.io/projected/ccaea5c4-3efc-48ab-8159-6db6f5f77555-kube-api-access-lrjwr\") pod \"apiserver-7bbb656c7d-lcmls\" (UID: \"ccaea5c4-3efc-48ab-8159-6db6f5f77555\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202708 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bf4a817-fc7e-4b1d-a530-b8280f4f2d0b-config\") pod \"console-operator-58897d9998-5kj7q\" (UID: \"5bf4a817-fc7e-4b1d-a530-b8280f4f2d0b\") " 
pod="openshift-console-operator/console-operator-58897d9998-5kj7q" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202709 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/75516b13-a330-4e17-a2e1-bd1c04ad9500-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gpsch\" (UID: \"75516b13-a330-4e17-a2e1-bd1c04ad9500\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gpsch" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202751 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/eb3298cf-506d-4b81-a283-801f689c5db6-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-6tqpl\" (UID: \"eb3298cf-506d-4b81-a283-801f689c5db6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tqpl" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202773 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-audit-dir\") pod \"apiserver-76f77b778f-jsll6\" (UID: \"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202796 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0353afa5-86b4-40c4-9633-c75046a0e84d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8pkhf\" (UID: \"0353afa5-86b4-40c4-9633-c75046a0e84d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8pkhf" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202816 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-etcd-serving-ca\") pod \"apiserver-76f77b778f-jsll6\" (UID: \"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202831 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ccaea5c4-3efc-48ab-8159-6db6f5f77555-etcd-client\") pod \"apiserver-7bbb656c7d-lcmls\" (UID: \"ccaea5c4-3efc-48ab-8159-6db6f5f77555\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202849 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxgnj\" (UniqueName: \"kubernetes.io/projected/2f3a466e-6003-48b2-b71d-7047639f3548-kube-api-access-rxgnj\") pod \"authentication-operator-69f744f599-wxvjw\" (UID: \"2f3a466e-6003-48b2-b71d-7047639f3548\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wxvjw" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202867 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0353afa5-86b4-40c4-9633-c75046a0e84d-client-ca\") pod \"controller-manager-879f6c89f-8pkhf\" (UID: \"0353afa5-86b4-40c4-9633-c75046a0e84d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8pkhf" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202884 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202899 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1366fe1c-9d0b-4a6e-bfa4-7ced09204637-etcd-ca\") pod \"etcd-operator-b45778765-hn4g9\" (UID: \"1366fe1c-9d0b-4a6e-bfa4-7ced09204637\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hn4g9" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202914 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/75516b13-a330-4e17-a2e1-bd1c04ad9500-images\") pod \"machine-api-operator-5694c8668f-gpsch\" (UID: \"75516b13-a330-4e17-a2e1-bd1c04ad9500\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gpsch" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202931 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5bf4a817-fc7e-4b1d-a530-b8280f4f2d0b-trusted-ca\") pod \"console-operator-58897d9998-5kj7q\" (UID: \"5bf4a817-fc7e-4b1d-a530-b8280f4f2d0b\") " pod="openshift-console-operator/console-operator-58897d9998-5kj7q" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202951 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.202969 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75516b13-a330-4e17-a2e1-bd1c04ad9500-config\") pod \"machine-api-operator-5694c8668f-gpsch\" (UID: \"75516b13-a330-4e17-a2e1-bd1c04ad9500\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gpsch" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.203033 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bwwv\" (UniqueName: \"kubernetes.io/projected/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-kube-api-access-7bwwv\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.203050 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccaea5c4-3efc-48ab-8159-6db6f5f77555-serving-cert\") pod \"apiserver-7bbb656c7d-lcmls\" (UID: \"ccaea5c4-3efc-48ab-8159-6db6f5f77555\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.203068 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/08d6cb47-472a-4bda-bfc0-738029e84e40-console-config\") pod \"console-f9d7485db-lzvgr\" (UID: \"08d6cb47-472a-4bda-bfc0-738029e84e40\") " 
pod="openshift-console/console-f9d7485db-lzvgr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.203107 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-config\") pod \"apiserver-76f77b778f-jsll6\" (UID: \"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.203124 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfn42\" (UniqueName: \"kubernetes.io/projected/1366fe1c-9d0b-4a6e-bfa4-7ced09204637-kube-api-access-lfn42\") pod \"etcd-operator-b45778765-hn4g9\" (UID: \"1366fe1c-9d0b-4a6e-bfa4-7ced09204637\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hn4g9" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.203139 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f3a466e-6003-48b2-b71d-7047639f3548-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wxvjw\" (UID: \"2f3a466e-6003-48b2-b71d-7047639f3548\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wxvjw" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.203903 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f3a466e-6003-48b2-b71d-7047639f3548-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wxvjw\" (UID: \"2f3a466e-6003-48b2-b71d-7047639f3548\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wxvjw" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.204812 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4f2fba6-7528-40dc-8c18-6ca44115cf2a-config\") pod \"machine-approver-56656f9798-k5fcf\" (UID: \"e4f2fba6-7528-40dc-8c18-6ca44115cf2a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5fcf" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.206366 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zhl6b"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.206392 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-q92lq"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.206402 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-5kj7q"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.206774 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f3a466e-6003-48b2-b71d-7047639f3548-service-ca-bundle\") pod \"authentication-operator-69f744f599-wxvjw\" (UID: \"2f3a466e-6003-48b2-b71d-7047639f3548\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wxvjw" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.206929 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.207538 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-encryption-config\") pod \"apiserver-76f77b778f-jsll6\" (UID: \"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.207617 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.208190 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75516b13-a330-4e17-a2e1-bd1c04ad9500-config\") pod \"machine-api-operator-5694c8668f-gpsch\" (UID: \"75516b13-a330-4e17-a2e1-bd1c04ad9500\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gpsch" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.209447 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.210127 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0353afa5-86b4-40c4-9633-c75046a0e84d-client-ca\") pod \"controller-manager-879f6c89f-8pkhf\" (UID: \"0353afa5-86b4-40c4-9633-c75046a0e84d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8pkhf" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.210147 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0353afa5-86b4-40c4-9633-c75046a0e84d-serving-cert\") pod \"controller-manager-879f6c89f-8pkhf\" (UID: \"0353afa5-86b4-40c4-9633-c75046a0e84d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8pkhf" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.210576 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ccaea5c4-3efc-48ab-8159-6db6f5f77555-audit-policies\") pod \"apiserver-7bbb656c7d-lcmls\" (UID: \"ccaea5c4-3efc-48ab-8159-6db6f5f77555\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.211018 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.211571 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccaea5c4-3efc-48ab-8159-6db6f5f77555-serving-cert\") pod \"apiserver-7bbb656c7d-lcmls\" (UID: \"ccaea5c4-3efc-48ab-8159-6db6f5f77555\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls" Sep 30 17:05:44 
crc kubenswrapper[4821]: I0930 17:05:44.212147 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/08d6cb47-472a-4bda-bfc0-738029e84e40-console-config\") pod \"console-f9d7485db-lzvgr\" (UID: \"08d6cb47-472a-4bda-bfc0-738029e84e40\") " pod="openshift-console/console-f9d7485db-lzvgr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.212732 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-config\") pod \"apiserver-76f77b778f-jsll6\" (UID: \"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.213596 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-etcd-client\") pod \"apiserver-76f77b778f-jsll6\" (UID: \"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.214340 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-lzvgr"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.214367 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4qvdx"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.214783 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0353afa5-86b4-40c4-9633-c75046a0e84d-config\") pod \"controller-manager-879f6c89f-8pkhf\" (UID: \"0353afa5-86b4-40c4-9633-c75046a0e84d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8pkhf" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.214883 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t2wjt"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.215355 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t2wjt" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.215542 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1366fe1c-9d0b-4a6e-bfa4-7ced09204637-etcd-service-ca\") pod \"etcd-operator-b45778765-hn4g9\" (UID: \"1366fe1c-9d0b-4a6e-bfa4-7ced09204637\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hn4g9" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.215882 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-audit-dir\") pod \"apiserver-76f77b778f-jsll6\" (UID: \"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.216160 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/75516b13-a330-4e17-a2e1-bd1c04ad9500-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gpsch\" (UID: \"75516b13-a330-4e17-a2e1-bd1c04ad9500\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gpsch" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.216225 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4qvdx" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.216988 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0353afa5-86b4-40c4-9633-c75046a0e84d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8pkhf\" (UID: \"0353afa5-86b4-40c4-9633-c75046a0e84d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8pkhf" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.217497 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-etcd-serving-ca\") pod \"apiserver-76f77b778f-jsll6\" (UID: \"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.218576 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bf4a817-fc7e-4b1d-a530-b8280f4f2d0b-serving-cert\") pod \"console-operator-58897d9998-5kj7q\" (UID: \"5bf4a817-fc7e-4b1d-a530-b8280f4f2d0b\") " pod="openshift-console-operator/console-operator-58897d9998-5kj7q" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.219651 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.219741 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-audit-policies\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.220348 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1366fe1c-9d0b-4a6e-bfa4-7ced09204637-etcd-ca\") pod \"etcd-operator-b45778765-hn4g9\" (UID: \"1366fe1c-9d0b-4a6e-bfa4-7ced09204637\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hn4g9" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.220420 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.221099 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/75516b13-a330-4e17-a2e1-bd1c04ad9500-images\") pod \"machine-api-operator-5694c8668f-gpsch\" (UID: \"75516b13-a330-4e17-a2e1-bd1c04ad9500\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gpsch" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.221340 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-f7fvc"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.221891 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74612077-5860-4d75-8655-e48515893c20-metrics-tls\") pod \"dns-operator-744455d44c-q92lq\" (UID: \"74612077-5860-4d75-8655-e48515893c20\") " pod="openshift-dns-operator/dns-operator-744455d44c-q92lq" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.222065 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.222438 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5bf4a817-fc7e-4b1d-a530-b8280f4f2d0b-trusted-ca\") pod \"console-operator-58897d9998-5kj7q\" (UID: \"5bf4a817-fc7e-4b1d-a530-b8280f4f2d0b\") " pod="openshift-console-operator/console-operator-58897d9998-5kj7q" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.222921 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eb3298cf-506d-4b81-a283-801f689c5db6-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-6tqpl\" (UID: \"eb3298cf-506d-4b81-a283-801f689c5db6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tqpl" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.223641 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-image-import-ca\") pod \"apiserver-76f77b778f-jsll6\" (UID: \"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.224074 4821 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.224476 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1366fe1c-9d0b-4a6e-bfa4-7ced09204637-config\") pod \"etcd-operator-b45778765-hn4g9\" (UID: \"1366fe1c-9d0b-4a6e-bfa4-7ced09204637\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hn4g9" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.224522 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-node-pullsecrets\") pod \"apiserver-76f77b778f-jsll6\" (UID: \"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.224946 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-audit\") pod \"apiserver-76f77b778f-jsll6\" (UID: \"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.225131 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.225858 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e281506-b9d7-4e26-964f-e472f7f2661f-client-ca\") pod \"route-controller-manager-6576b87f9c-bsh9r\" (UID: \"0e281506-b9d7-4e26-964f-e472f7f2661f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bsh9r" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.226281 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.226719 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ccaea5c4-3efc-48ab-8159-6db6f5f77555-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-lcmls\" (UID: \"ccaea5c4-3efc-48ab-8159-6db6f5f77555\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.227206 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-72jp6"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.227671 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-hlfh2"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.227983 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-4c49j"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.228218 4821 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f3a466e-6003-48b2-b71d-7047639f3548-config\") pod \"authentication-operator-69f744f599-wxvjw\" (UID: \"2f3a466e-6003-48b2-b71d-7047639f3548\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wxvjw" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.228452 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hn4g9"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.228517 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-4c49j" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.228691 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-f7fvc" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.228814 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-72jp6" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.228946 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hlfh2" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.229998 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tqpl"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.230364 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ccaea5c4-3efc-48ab-8159-6db6f5f77555-encryption-config\") pod \"apiserver-7bbb656c7d-lcmls\" (UID: \"ccaea5c4-3efc-48ab-8159-6db6f5f77555\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.231613 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e281506-b9d7-4e26-964f-e472f7f2661f-config\") pod \"route-controller-manager-6576b87f9c-bsh9r\" (UID: \"0e281506-b9d7-4e26-964f-e472f7f2661f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bsh9r" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.235988 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtlgs"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.239808 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qrpnr"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.239824 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wstkc"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.239398 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/08d6cb47-472a-4bda-bfc0-738029e84e40-service-ca\") pod \"console-f9d7485db-lzvgr\" (UID: \"08d6cb47-472a-4bda-bfc0-738029e84e40\") " pod="openshift-console/console-f9d7485db-lzvgr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.240732 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e281506-b9d7-4e26-964f-e472f7f2661f-serving-cert\") pod \"route-controller-manager-6576b87f9c-bsh9r\" (UID: 
\"0e281506-b9d7-4e26-964f-e472f7f2661f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bsh9r" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.240917 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.238972 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be92daba-f247-4cec-80cf-a858c1fc034e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-wstkc\" (UID: \"be92daba-f247-4cec-80cf-a858c1fc034e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wstkc" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.242142 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/08d6cb47-472a-4bda-bfc0-738029e84e40-oauth-serving-cert\") pod \"console-f9d7485db-lzvgr\" (UID: \"08d6cb47-472a-4bda-bfc0-738029e84e40\") " pod="openshift-console/console-f9d7485db-lzvgr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.242743 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.243485 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d87b4d2e-ced9-47af-8556-ddd4e0d57769-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zhl6b\" (UID: \"d87b4d2e-ced9-47af-8556-ddd4e0d57769\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zhl6b" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.243501 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.243894 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/eb3298cf-506d-4b81-a283-801f689c5db6-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-6tqpl\" (UID: \"eb3298cf-506d-4b81-a283-801f689c5db6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tqpl" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.244325 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ccaea5c4-3efc-48ab-8159-6db6f5f77555-audit-dir\") pod \"apiserver-7bbb656c7d-lcmls\" (UID: \"ccaea5c4-3efc-48ab-8159-6db6f5f77555\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.244414 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e4f2fba6-7528-40dc-8c18-6ca44115cf2a-auth-proxy-config\") pod 
\"machine-approver-56656f9798-k5fcf\" (UID: \"e4f2fba6-7528-40dc-8c18-6ca44115cf2a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5fcf" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.244488 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/08d6cb47-472a-4bda-bfc0-738029e84e40-console-oauth-config\") pod \"console-f9d7485db-lzvgr\" (UID: \"08d6cb47-472a-4bda-bfc0-738029e84e40\") " pod="openshift-console/console-f9d7485db-lzvgr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.244974 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08d6cb47-472a-4bda-bfc0-738029e84e40-trusted-ca-bundle\") pod \"console-f9d7485db-lzvgr\" (UID: \"08d6cb47-472a-4bda-bfc0-738029e84e40\") " pod="openshift-console/console-f9d7485db-lzvgr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.245542 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ccaea5c4-3efc-48ab-8159-6db6f5f77555-etcd-client\") pod \"apiserver-7bbb656c7d-lcmls\" (UID: \"ccaea5c4-3efc-48ab-8159-6db6f5f77555\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.245561 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v2bg8"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.247826 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nhz6b"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.248635 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1366fe1c-9d0b-4a6e-bfa4-7ced09204637-serving-cert\") pod \"etcd-operator-b45778765-hn4g9\" (UID: \"1366fe1c-9d0b-4a6e-bfa4-7ced09204637\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hn4g9" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.248966 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1366fe1c-9d0b-4a6e-bfa4-7ced09204637-etcd-client\") pod \"etcd-operator-b45778765-hn4g9\" (UID: \"1366fe1c-9d0b-4a6e-bfa4-7ced09204637\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hn4g9" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.249450 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2adde79f-8b11-4ac3-88b6-ea7d2e6c5870-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rbgqj\" (UID: \"2adde79f-8b11-4ac3-88b6-ea7d2e6c5870\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rbgqj" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.249739 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ftrsr"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.250320 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d87b4d2e-ced9-47af-8556-ddd4e0d57769-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zhl6b\" (UID: \"d87b4d2e-ced9-47af-8556-ddd4e0d57769\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zhl6b" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.251289 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5wqbt"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.253042 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8pkhf"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.253994 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-854tr"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.254374 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e4f2fba6-7528-40dc-8c18-6ca44115cf2a-machine-approver-tls\") pod \"machine-approver-56656f9798-k5fcf\" (UID: \"e4f2fba6-7528-40dc-8c18-6ca44115cf2a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5fcf" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.254471 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f3a466e-6003-48b2-b71d-7047639f3548-serving-cert\") pod \"authentication-operator-69f744f599-wxvjw\" (UID: \"2f3a466e-6003-48b2-b71d-7047639f3548\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wxvjw" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.254651 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/08d6cb47-472a-4bda-bfc0-738029e84e40-console-serving-cert\") pod \"console-f9d7485db-lzvgr\" (UID: \"08d6cb47-472a-4bda-bfc0-738029e84e40\") " pod="openshift-console/console-f9d7485db-lzvgr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.255125 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be92daba-f247-4cec-80cf-a858c1fc034e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-wstkc\" (UID: \"be92daba-f247-4cec-80cf-a858c1fc034e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wstkc" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.255377 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bsh9r"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.256454 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.258899 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6l6qm"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.258953 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rmps9"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.258962 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-wvc92"] Sep 30 17:05:44 crc 
kubenswrapper[4821]: I0930 17:05:44.264863 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320860-mxq6r"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.267378 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wxvjw"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.276750 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lw2j2"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.279227 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-4fxjh"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.282183 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-crvsh"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.282663 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.283683 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rszth"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.284746 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-ldwrh"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.285613 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-ldwrh" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.286258 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t2wjt"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.287514 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-hlfh2"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.288380 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4qvdx"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.289860 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-f7fvc"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.290588 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m2s62"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.291876 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-4c49j"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.292937 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8gwrn"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.293937 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-k25bw"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.295009 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-72jp6"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.295095 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-k25bw" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.296403 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-k25bw"] Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.302075 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.303734 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/21855650-5cbc-49eb-8d6c-d6846546769d-srv-cert\") pod \"catalog-operator-68c6474976-gtlgs\" (UID: \"21855650-5cbc-49eb-8d6c-d6846546769d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtlgs" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.303785 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l9j7\" (UniqueName: \"kubernetes.io/projected/901c3ade-323e-469e-a9dd-6e568baadded-kube-api-access-2l9j7\") pod \"machine-config-controller-84d6567774-lw2j2\" (UID: \"901c3ade-323e-469e-a9dd-6e568baadded\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lw2j2" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.303812 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcff15ff-1913-4b35-bb98-0942fc11bdf3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8gwrn\" (UID: \"dcff15ff-1913-4b35-bb98-0942fc11bdf3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8gwrn" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.303877 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs7qb\" (UniqueName: \"kubernetes.io/projected/56fb6d19-7b78-4122-9989-0676a86c33dd-kube-api-access-xs7qb\") pod \"collect-profiles-29320860-mxq6r\" (UID: \"56fb6d19-7b78-4122-9989-0676a86c33dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320860-mxq6r" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.303923 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/901c3ade-323e-469e-a9dd-6e568baadded-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lw2j2\" (UID: \"901c3ade-323e-469e-a9dd-6e568baadded\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lw2j2" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.303948 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd286c93-29a6-48e6-b22c-fa70d5bf8e21-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-nhz6b\" (UID: \"fd286c93-29a6-48e6-b22c-fa70d5bf8e21\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nhz6b" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.304002 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd286c93-29a6-48e6-b22c-fa70d5bf8e21-config\") pod \"kube-apiserver-operator-766d6c64bb-nhz6b\" (UID: \"fd286c93-29a6-48e6-b22c-fa70d5bf8e21\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nhz6b" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.304033 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/56fb6d19-7b78-4122-9989-0676a86c33dd-secret-volume\") pod \"collect-profiles-29320860-mxq6r\" (UID: \"56fb6d19-7b78-4122-9989-0676a86c33dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320860-mxq6r" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.304094 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/901c3ade-323e-469e-a9dd-6e568baadded-proxy-tls\") pod \"machine-config-controller-84d6567774-lw2j2\" (UID: \"901c3ade-323e-469e-a9dd-6e568baadded\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lw2j2" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.304120 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4241c54f-7e90-4a5b-91d2-4904fd633c35-auth-proxy-config\") pod \"machine-config-operator-74547568cd-wvc92\" (UID: \"4241c54f-7e90-4a5b-91d2-4904fd633c35\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wvc92" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.304186 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxzjx\" (UniqueName: \"kubernetes.io/projected/21855650-5cbc-49eb-8d6c-d6846546769d-kube-api-access-jxzjx\") pod \"catalog-operator-68c6474976-gtlgs\" (UID: \"21855650-5cbc-49eb-8d6c-d6846546769d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtlgs" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.304201 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/56fb6d19-7b78-4122-9989-0676a86c33dd-config-volume\") pod \"collect-profiles-29320860-mxq6r\" (UID: \"56fb6d19-7b78-4122-9989-0676a86c33dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320860-mxq6r" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.304235 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/21855650-5cbc-49eb-8d6c-d6846546769d-profile-collector-cert\") pod \"catalog-operator-68c6474976-gtlgs\" (UID: \"21855650-5cbc-49eb-8d6c-d6846546769d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtlgs" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.304250 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcff15ff-1913-4b35-bb98-0942fc11bdf3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8gwrn\" (UID: \"dcff15ff-1913-4b35-bb98-0942fc11bdf3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8gwrn" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.304282 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd286c93-29a6-48e6-b22c-fa70d5bf8e21-kube-api-access\") pod 
\"kube-apiserver-operator-766d6c64bb-nhz6b\" (UID: \"fd286c93-29a6-48e6-b22c-fa70d5bf8e21\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nhz6b" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.304327 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4241c54f-7e90-4a5b-91d2-4904fd633c35-images\") pod \"machine-config-operator-74547568cd-wvc92\" (UID: \"4241c54f-7e90-4a5b-91d2-4904fd633c35\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wvc92" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.304391 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg4td\" (UniqueName: \"kubernetes.io/projected/4241c54f-7e90-4a5b-91d2-4904fd633c35-kube-api-access-dg4td\") pod \"machine-config-operator-74547568cd-wvc92\" (UID: \"4241c54f-7e90-4a5b-91d2-4904fd633c35\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wvc92" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.304422 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4241c54f-7e90-4a5b-91d2-4904fd633c35-proxy-tls\") pod \"machine-config-operator-74547568cd-wvc92\" (UID: \"4241c54f-7e90-4a5b-91d2-4904fd633c35\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wvc92" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.304444 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcff15ff-1913-4b35-bb98-0942fc11bdf3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8gwrn\" (UID: \"dcff15ff-1913-4b35-bb98-0942fc11bdf3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8gwrn" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.322103 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.341527 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.362032 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.382386 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.401592 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.405415 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4241c54f-7e90-4a5b-91d2-4904fd633c35-images\") pod \"machine-config-operator-74547568cd-wvc92\" (UID: \"4241c54f-7e90-4a5b-91d2-4904fd633c35\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wvc92" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.405462 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-dg4td\" (UniqueName: \"kubernetes.io/projected/4241c54f-7e90-4a5b-91d2-4904fd633c35-kube-api-access-dg4td\") pod \"machine-config-operator-74547568cd-wvc92\" (UID: \"4241c54f-7e90-4a5b-91d2-4904fd633c35\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wvc92" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.405501 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4241c54f-7e90-4a5b-91d2-4904fd633c35-proxy-tls\") pod \"machine-config-operator-74547568cd-wvc92\" (UID: \"4241c54f-7e90-4a5b-91d2-4904fd633c35\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wvc92" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.405546 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcff15ff-1913-4b35-bb98-0942fc11bdf3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8gwrn\" (UID: \"dcff15ff-1913-4b35-bb98-0942fc11bdf3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8gwrn" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.405601 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/21855650-5cbc-49eb-8d6c-d6846546769d-srv-cert\") pod \"catalog-operator-68c6474976-gtlgs\" (UID: \"21855650-5cbc-49eb-8d6c-d6846546769d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtlgs" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.405620 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l9j7\" (UniqueName: \"kubernetes.io/projected/901c3ade-323e-469e-a9dd-6e568baadded-kube-api-access-2l9j7\") pod \"machine-config-controller-84d6567774-lw2j2\" (UID: \"901c3ade-323e-469e-a9dd-6e568baadded\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lw2j2" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.405635 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcff15ff-1913-4b35-bb98-0942fc11bdf3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8gwrn\" (UID: \"dcff15ff-1913-4b35-bb98-0942fc11bdf3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8gwrn" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.405657 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs7qb\" (UniqueName: \"kubernetes.io/projected/56fb6d19-7b78-4122-9989-0676a86c33dd-kube-api-access-xs7qb\") pod \"collect-profiles-29320860-mxq6r\" (UID: \"56fb6d19-7b78-4122-9989-0676a86c33dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320860-mxq6r" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.405684 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/901c3ade-323e-469e-a9dd-6e568baadded-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lw2j2\" (UID: \"901c3ade-323e-469e-a9dd-6e568baadded\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lw2j2" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.405707 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd286c93-29a6-48e6-b22c-fa70d5bf8e21-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-nhz6b\" (UID: \"fd286c93-29a6-48e6-b22c-fa70d5bf8e21\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nhz6b" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.405732 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd286c93-29a6-48e6-b22c-fa70d5bf8e21-config\") pod \"kube-apiserver-operator-766d6c64bb-nhz6b\" (UID: \"fd286c93-29a6-48e6-b22c-fa70d5bf8e21\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nhz6b" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.405755 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/56fb6d19-7b78-4122-9989-0676a86c33dd-secret-volume\") pod \"collect-profiles-29320860-mxq6r\" (UID: \"56fb6d19-7b78-4122-9989-0676a86c33dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320860-mxq6r" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.405783 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/901c3ade-323e-469e-a9dd-6e568baadded-proxy-tls\") pod \"machine-config-controller-84d6567774-lw2j2\" (UID: \"901c3ade-323e-469e-a9dd-6e568baadded\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lw2j2" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.405813 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4241c54f-7e90-4a5b-91d2-4904fd633c35-auth-proxy-config\") pod \"machine-config-operator-74547568cd-wvc92\" (UID: \"4241c54f-7e90-4a5b-91d2-4904fd633c35\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wvc92" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.405851 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxzjx\" (UniqueName: \"kubernetes.io/projected/21855650-5cbc-49eb-8d6c-d6846546769d-kube-api-access-jxzjx\") pod \"catalog-operator-68c6474976-gtlgs\" (UID: \"21855650-5cbc-49eb-8d6c-d6846546769d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtlgs" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.405869 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/56fb6d19-7b78-4122-9989-0676a86c33dd-config-volume\") pod \"collect-profiles-29320860-mxq6r\" (UID: \"56fb6d19-7b78-4122-9989-0676a86c33dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320860-mxq6r" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.405884 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/21855650-5cbc-49eb-8d6c-d6846546769d-profile-collector-cert\") pod \"catalog-operator-68c6474976-gtlgs\" (UID: \"21855650-5cbc-49eb-8d6c-d6846546769d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtlgs" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.405898 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcff15ff-1913-4b35-bb98-0942fc11bdf3-config\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-8gwrn\" (UID: \"dcff15ff-1913-4b35-bb98-0942fc11bdf3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8gwrn" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.405925 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd286c93-29a6-48e6-b22c-fa70d5bf8e21-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-nhz6b\" (UID: \"fd286c93-29a6-48e6-b22c-fa70d5bf8e21\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nhz6b" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.407015 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4241c54f-7e90-4a5b-91d2-4904fd633c35-auth-proxy-config\") pod \"machine-config-operator-74547568cd-wvc92\" (UID: \"4241c54f-7e90-4a5b-91d2-4904fd633c35\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wvc92" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.407187 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd286c93-29a6-48e6-b22c-fa70d5bf8e21-config\") pod \"kube-apiserver-operator-766d6c64bb-nhz6b\" (UID: \"fd286c93-29a6-48e6-b22c-fa70d5bf8e21\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nhz6b" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.407289 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/901c3ade-323e-469e-a9dd-6e568baadded-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lw2j2\" (UID: \"901c3ade-323e-469e-a9dd-6e568baadded\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lw2j2" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.410354 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd286c93-29a6-48e6-b22c-fa70d5bf8e21-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-nhz6b\" (UID: \"fd286c93-29a6-48e6-b22c-fa70d5bf8e21\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nhz6b" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.421741 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.430662 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/21855650-5cbc-49eb-8d6c-d6846546769d-srv-cert\") pod \"catalog-operator-68c6474976-gtlgs\" (UID: \"21855650-5cbc-49eb-8d6c-d6846546769d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtlgs" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.441839 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.461513 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.469377 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/56fb6d19-7b78-4122-9989-0676a86c33dd-secret-volume\") pod \"collect-profiles-29320860-mxq6r\" (UID: \"56fb6d19-7b78-4122-9989-0676a86c33dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320860-mxq6r" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.469381 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/21855650-5cbc-49eb-8d6c-d6846546769d-profile-collector-cert\") pod \"catalog-operator-68c6474976-gtlgs\" (UID: \"21855650-5cbc-49eb-8d6c-d6846546769d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtlgs" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.481850 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.501141 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.522294 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.529748 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/901c3ade-323e-469e-a9dd-6e568baadded-proxy-tls\") pod \"machine-config-controller-84d6567774-lw2j2\" (UID: \"901c3ade-323e-469e-a9dd-6e568baadded\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lw2j2" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.541575 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.562412 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.581602 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.601746 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.622494 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.642500 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.647717 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/56fb6d19-7b78-4122-9989-0676a86c33dd-config-volume\") pod \"collect-profiles-29320860-mxq6r\" (UID: \"56fb6d19-7b78-4122-9989-0676a86c33dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320860-mxq6r" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.661998 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.682566 4821 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.701980 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.721864 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.726840 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4241c54f-7e90-4a5b-91d2-4904fd633c35-images\") pod \"machine-config-operator-74547568cd-wvc92\" (UID: \"4241c54f-7e90-4a5b-91d2-4904fd633c35\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wvc92" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.742479 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.762310 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.781759 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.802031 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.822737 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.841866 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.861902 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.881619 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.902777 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.910120 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4241c54f-7e90-4a5b-91d2-4904fd633c35-proxy-tls\") pod \"machine-config-operator-74547568cd-wvc92\" (UID: \"4241c54f-7e90-4a5b-91d2-4904fd633c35\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wvc92" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.921379 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.942443 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Sep 30 
17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.961923 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.971982 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcff15ff-1913-4b35-bb98-0942fc11bdf3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8gwrn\" (UID: \"dcff15ff-1913-4b35-bb98-0942fc11bdf3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8gwrn" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.982144 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Sep 30 17:05:44 crc kubenswrapper[4821]: I0930 17:05:44.986980 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcff15ff-1913-4b35-bb98-0942fc11bdf3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8gwrn\" (UID: \"dcff15ff-1913-4b35-bb98-0942fc11bdf3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8gwrn" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.022104 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.043465 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.062074 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.082239 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.103078 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.122076 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.143158 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.171690 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.182250 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.201224 4821 request.go:700] Waited for 1.002196792s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.203170 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Sep 30 
17:05:45 crc kubenswrapper[4821]: E0930 17:05:45.213967 4821 secret.go:188] Couldn't get secret openshift-config-operator/config-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Sep 30 17:05:45 crc kubenswrapper[4821]: E0930 17:05:45.215207 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a617aa70-de94-4903-863e-0b10a2c9253d-serving-cert podName:a617aa70-de94-4903-863e-0b10a2c9253d nodeName:}" failed. No retries permitted until 2025-09-30 17:05:45.715152461 +0000 UTC m=+141.620198615 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a617aa70-de94-4903-863e-0b10a2c9253d-serving-cert") pod "openshift-config-operator-7777fb866f-9t59s" (UID: "a617aa70-de94-4903-863e-0b10a2c9253d") : failed to sync secret cache: timed out waiting for the condition Sep 30 17:05:45 crc kubenswrapper[4821]: E0930 17:05:45.221348 4821 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Sep 30 17:05:45 crc kubenswrapper[4821]: E0930 17:05:45.221537 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-trusted-ca-bundle podName:699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a nodeName:}" failed. No retries permitted until 2025-09-30 17:05:45.721489529 +0000 UTC m=+141.626535603 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-trusted-ca-bundle") pod "apiserver-76f77b778f-jsll6" (UID: "699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a") : failed to sync configmap cache: timed out waiting for the condition Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.223070 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Sep 30 17:05:45 crc kubenswrapper[4821]: E0930 17:05:45.223946 4821 secret.go:188] Couldn't get secret openshift-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Sep 30 17:05:45 crc kubenswrapper[4821]: E0930 17:05:45.224046 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-serving-cert podName:699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a nodeName:}" failed. No retries permitted until 2025-09-30 17:05:45.724026563 +0000 UTC m=+141.629072717 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-serving-cert") pod "apiserver-76f77b778f-jsll6" (UID: "699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a") : failed to sync secret cache: timed out waiting for the condition Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.242289 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.263125 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.282641 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.302778 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.322624 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.343494 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.393860 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm59z\" (UniqueName: \"kubernetes.io/projected/2adde79f-8b11-4ac3-88b6-ea7d2e6c5870-kube-api-access-tm59z\") pod \"cluster-samples-operator-665b6dd947-rbgqj\" (UID: \"2adde79f-8b11-4ac3-88b6-ea7d2e6c5870\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rbgqj" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.396799 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjk4p\" (UniqueName: \"kubernetes.io/projected/08d6cb47-472a-4bda-bfc0-738029e84e40-kube-api-access-vjk4p\") pod \"console-f9d7485db-lzvgr\" (UID: \"08d6cb47-472a-4bda-bfc0-738029e84e40\") " pod="openshift-console/console-f9d7485db-lzvgr" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.418463 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bwwv\" (UniqueName: \"kubernetes.io/projected/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-kube-api-access-7bwwv\") pod \"oauth-openshift-558db77b4-qrpnr\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.443793 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxgnj\" (UniqueName: \"kubernetes.io/projected/2f3a466e-6003-48b2-b71d-7047639f3548-kube-api-access-rxgnj\") pod \"authentication-operator-69f744f599-wxvjw\" (UID: \"2f3a466e-6003-48b2-b71d-7047639f3548\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wxvjw" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.456835 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgslc\" (UniqueName: \"kubernetes.io/projected/74612077-5860-4d75-8655-e48515893c20-kube-api-access-jgslc\") pod \"dns-operator-744455d44c-q92lq\" (UID: \"74612077-5860-4d75-8655-e48515893c20\") " pod="openshift-dns-operator/dns-operator-744455d44c-q92lq" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.465997 4821 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.475427 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqdfp\" (UniqueName: \"kubernetes.io/projected/be92daba-f247-4cec-80cf-a858c1fc034e-kube-api-access-cqdfp\") pod \"openshift-apiserver-operator-796bbdcf4f-wstkc\" (UID: \"be92daba-f247-4cec-80cf-a858c1fc034e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wstkc" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.506010 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-lzvgr" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.508137 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfn42\" (UniqueName: \"kubernetes.io/projected/1366fe1c-9d0b-4a6e-bfa4-7ced09204637-kube-api-access-lfn42\") pod \"etcd-operator-b45778765-hn4g9\" (UID: \"1366fe1c-9d0b-4a6e-bfa4-7ced09204637\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hn4g9" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.517991 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48wnr\" (UniqueName: \"kubernetes.io/projected/eb3298cf-506d-4b81-a283-801f689c5db6-kube-api-access-48wnr\") pod \"cluster-image-registry-operator-dc59b4c8b-6tqpl\" (UID: \"eb3298cf-506d-4b81-a283-801f689c5db6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tqpl" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.522182 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.557329 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxjsr\" (UniqueName: \"kubernetes.io/projected/d87b4d2e-ced9-47af-8556-ddd4e0d57769-kube-api-access-jxjsr\") pod \"openshift-controller-manager-operator-756b6f6bc6-zhl6b\" (UID: \"d87b4d2e-ced9-47af-8556-ddd4e0d57769\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zhl6b" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.579141 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wstkc" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.582375 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.583991 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/eb3298cf-506d-4b81-a283-801f689c5db6-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-6tqpl\" (UID: \"eb3298cf-506d-4b81-a283-801f689c5db6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tqpl" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.602353 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.613958 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rbgqj" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.625325 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.647317 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.656007 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-hn4g9" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.657770 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qrpnr"] Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.662204 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.668741 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-q92lq" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.685039 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-wxvjw" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.706779 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snntl\" (UniqueName: \"kubernetes.io/projected/f0fb9646-336c-4014-92ca-bb5caa55dde5-kube-api-access-snntl\") pod \"downloads-7954f5f757-4fxjh\" (UID: \"f0fb9646-336c-4014-92ca-bb5caa55dde5\") " pod="openshift-console/downloads-7954f5f757-4fxjh" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.723497 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jsll6\" (UID: \"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.723613 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a617aa70-de94-4903-863e-0b10a2c9253d-serving-cert\") pod \"openshift-config-operator-7777fb866f-9t59s\" (UID: \"a617aa70-de94-4903-863e-0b10a2c9253d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9t59s" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.726883 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-lzvgr"] Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.727534 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hflpx\" (UniqueName: \"kubernetes.io/projected/75516b13-a330-4e17-a2e1-bd1c04ad9500-kube-api-access-hflpx\") pod \"machine-api-operator-5694c8668f-gpsch\" (UID: \"75516b13-a330-4e17-a2e1-bd1c04ad9500\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gpsch" Sep 30 17:05:45 crc kubenswrapper[4821]: W0930 17:05:45.755564 4821 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08d6cb47_472a_4bda_bfc0_738029e84e40.slice/crio-c1ce6d77d61b7eb78bac2195584ecc11d9d07d017a73197ef559965aa7abc73e WatchSource:0}: Error finding container c1ce6d77d61b7eb78bac2195584ecc11d9d07d017a73197ef559965aa7abc73e: Status 404 returned error can't find the container with id c1ce6d77d61b7eb78bac2195584ecc11d9d07d017a73197ef559965aa7abc73e Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.765787 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hczkz\" (UniqueName: \"kubernetes.io/projected/0e281506-b9d7-4e26-964f-e472f7f2661f-kube-api-access-hczkz\") pod \"route-controller-manager-6576b87f9c-bsh9r\" (UID: \"0e281506-b9d7-4e26-964f-e472f7f2661f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bsh9r" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.777440 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zhl6b" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.778747 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6dx5\" (UniqueName: \"kubernetes.io/projected/5bf4a817-fc7e-4b1d-a530-b8280f4f2d0b-kube-api-access-f6dx5\") pod \"console-operator-58897d9998-5kj7q\" (UID: \"5bf4a817-fc7e-4b1d-a530-b8280f4f2d0b\") " pod="openshift-console-operator/console-operator-58897d9998-5kj7q" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.781906 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.795566 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tqpl" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.807562 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.822327 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.824585 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-serving-cert\") pod \"apiserver-76f77b778f-jsll6\" (UID: \"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.851822 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.858147 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rbgqj"] Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.861805 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.883563 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.903184 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.903268 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bsh9r" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.920140 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wstkc"] Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.922876 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.928968 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-4fxjh" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.940619 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-5kj7q" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.945136 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.965696 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Sep 30 17:05:45 crc kubenswrapper[4821]: I0930 17:05:45.982696 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.002434 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.027142 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-gpsch" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.027742 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.042241 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.061662 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.066990 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hn4g9"] Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.094967 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.102326 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.136887 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tqpl"] Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.147467 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx9s4\" (UniqueName: \"kubernetes.io/projected/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-kube-api-access-fx9s4\") pod \"apiserver-76f77b778f-jsll6\" (UID: \"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.155133 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zhl6b"] Sep 30 17:05:46 crc kubenswrapper[4821]: W0930 17:05:46.158595 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb3298cf_506d_4b81_a283_801f689c5db6.slice/crio-307665b7008d53a7a89aee6654e5ae86f53a008071af9ce5d1d033c326b341a8 WatchSource:0}: Error finding container 307665b7008d53a7a89aee6654e5ae86f53a008071af9ce5d1d033c326b341a8: Status 404 returned error can't find the container with id 307665b7008d53a7a89aee6654e5ae86f53a008071af9ce5d1d033c326b341a8 Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.162633 4821 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dmkfc\" (UniqueName: \"kubernetes.io/projected/e4f2fba6-7528-40dc-8c18-6ca44115cf2a-kube-api-access-dmkfc\") pod \"machine-approver-56656f9798-k5fcf\" (UID: \"e4f2fba6-7528-40dc-8c18-6ca44115cf2a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5fcf" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.194049 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dplxd\" (UniqueName: \"kubernetes.io/projected/0353afa5-86b4-40c4-9633-c75046a0e84d-kube-api-access-dplxd\") pod \"controller-manager-879f6c89f-8pkhf\" (UID: \"0353afa5-86b4-40c4-9633-c75046a0e84d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8pkhf" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.226512 4821 request.go:700] Waited for 1.940618988s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dnode-bootstrapper-token&limit=500&resourceVersion=0 Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.232759 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrjwr\" (UniqueName: \"kubernetes.io/projected/ccaea5c4-3efc-48ab-8159-6db6f5f77555-kube-api-access-lrjwr\") pod \"apiserver-7bbb656c7d-lcmls\" (UID: \"ccaea5c4-3efc-48ab-8159-6db6f5f77555\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.234262 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.234449 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-q92lq"] Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.240663 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wxvjw"] Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.245572 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.261774 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.281822 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Sep 30 17:05:46 crc kubenswrapper[4821]: W0930 17:05:46.287303 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f3a466e_6003_48b2_b71d_7047639f3548.slice/crio-09bea913daeff68cbba7ff36693beb11e1be862a3c51c0e48823d7a1e3938b73 WatchSource:0}: Error finding container 09bea913daeff68cbba7ff36693beb11e1be862a3c51c0e48823d7a1e3938b73: Status 404 returned error can't find the container with id 09bea913daeff68cbba7ff36693beb11e1be862a3c51c0e48823d7a1e3938b73 Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.302016 4821 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.322620 4821 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"hostpath-provisioner"/"kube-root-ca.crt" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.342522 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bsh9r"] Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.360813 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd286c93-29a6-48e6-b22c-fa70d5bf8e21-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-nhz6b\" (UID: \"fd286c93-29a6-48e6-b22c-fa70d5bf8e21\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nhz6b" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.377100 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg4td\" (UniqueName: \"kubernetes.io/projected/4241c54f-7e90-4a5b-91d2-4904fd633c35-kube-api-access-dg4td\") pod \"machine-config-operator-74547568cd-wvc92\" (UID: \"4241c54f-7e90-4a5b-91d2-4904fd633c35\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wvc92" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.382310 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5fcf" Sep 30 17:05:46 crc kubenswrapper[4821]: W0930 17:05:46.382445 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e281506_b9d7_4e26_964f_e472f7f2661f.slice/crio-1142dffcc9c5f8f001dfe8f38d65d45ce5b8083d88785af3cdb887962639d8c5 WatchSource:0}: Error finding container 1142dffcc9c5f8f001dfe8f38d65d45ce5b8083d88785af3cdb887962639d8c5: Status 404 returned error can't find the container with id 1142dffcc9c5f8f001dfe8f38d65d45ce5b8083d88785af3cdb887962639d8c5 Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.384902 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-4fxjh"] Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.395936 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nhz6b" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.397723 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcff15ff-1913-4b35-bb98-0942fc11bdf3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8gwrn\" (UID: \"dcff15ff-1913-4b35-bb98-0942fc11bdf3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8gwrn" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.433392 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l9j7\" (UniqueName: \"kubernetes.io/projected/901c3ade-323e-469e-a9dd-6e568baadded-kube-api-access-2l9j7\") pod \"machine-config-controller-84d6567774-lw2j2\" (UID: \"901c3ade-323e-469e-a9dd-6e568baadded\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lw2j2" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.440387 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs7qb\" (UniqueName: \"kubernetes.io/projected/56fb6d19-7b78-4122-9989-0676a86c33dd-kube-api-access-xs7qb\") pod \"collect-profiles-29320860-mxq6r\" (UID: \"56fb6d19-7b78-4122-9989-0676a86c33dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320860-mxq6r" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.450055 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8pkhf" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.450858 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.452445 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wvc92" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.468748 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8gwrn" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.469259 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.475937 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxzjx\" (UniqueName: \"kubernetes.io/projected/21855650-5cbc-49eb-8d6c-d6846546769d-kube-api-access-jxzjx\") pod \"catalog-operator-68c6474976-gtlgs\" (UID: \"21855650-5cbc-49eb-8d6c-d6846546769d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtlgs" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.484063 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a617aa70-de94-4903-863e-0b10a2c9253d-serving-cert\") pod \"openshift-config-operator-7777fb866f-9t59s\" (UID: \"a617aa70-de94-4903-863e-0b10a2c9253d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9t59s" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.486979 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.492660 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-5kj7q"] Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.511969 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.513783 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4fxjh" event={"ID":"f0fb9646-336c-4014-92ca-bb5caa55dde5","Type":"ContainerStarted","Data":"7673b89d2382299de924e68b6ceff529bba472c18eea635633f6109b83ad0dee"} Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.516371 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jsll6\" (UID: \"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.519956 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-gpsch"] Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.522808 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tqpl" event={"ID":"eb3298cf-506d-4b81-a283-801f689c5db6","Type":"ContainerStarted","Data":"307665b7008d53a7a89aee6654e5ae86f53a008071af9ce5d1d033c326b341a8"} Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.529268 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wstkc" event={"ID":"be92daba-f247-4cec-80cf-a858c1fc034e","Type":"ContainerStarted","Data":"b6d898b7b85afd469a9857a2cff9e9de48662d5fb358f0fa9e6fe428d6ab59e5"} Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.529313 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wstkc" 
event={"ID":"be92daba-f247-4cec-80cf-a858c1fc034e","Type":"ContainerStarted","Data":"53e2379c7a1541303ba2b4f84fe7f1660460fab116a0196c8444617e945b5226"} Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.533303 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-wxvjw" event={"ID":"2f3a466e-6003-48b2-b71d-7047639f3548","Type":"ContainerStarted","Data":"09bea913daeff68cbba7ff36693beb11e1be862a3c51c0e48823d7a1e3938b73"} Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.536436 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-hn4g9" event={"ID":"1366fe1c-9d0b-4a6e-bfa4-7ced09204637","Type":"ContainerStarted","Data":"5af483915559a30d315530786c37478e153b6ccc5335f2512ca109e6ecaed4d6"} Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.538882 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rbgqj" event={"ID":"2adde79f-8b11-4ac3-88b6-ea7d2e6c5870","Type":"ContainerStarted","Data":"330d4d86db669aa213845149599003354b74e6ff596fbf478c8c640c9244f39c"} Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.538905 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rbgqj" event={"ID":"2adde79f-8b11-4ac3-88b6-ea7d2e6c5870","Type":"ContainerStarted","Data":"d6eab31d1af213d99f0fcb6f26977071a0d1cbdf236ae84616faf4d653cfe481"} Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.542552 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bsh9r" event={"ID":"0e281506-b9d7-4e26-964f-e472f7f2661f","Type":"ContainerStarted","Data":"1142dffcc9c5f8f001dfe8f38d65d45ce5b8083d88785af3cdb887962639d8c5"} Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.544384 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.545255 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zhl6b" event={"ID":"d87b4d2e-ced9-47af-8556-ddd4e0d57769","Type":"ContainerStarted","Data":"4b9b28c16e1d7890f96afd6e54b5ca8d99497186772312c01350e800c6f65599"} Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.552467 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-registry-certificates\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.552667 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b098eac-8578-4bea-ae1d-af41fc24e2b7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-crvsh\" (UID: \"6b098eac-8578-4bea-ae1d-af41fc24e2b7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-crvsh" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.552722 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-lzrxl\" (UniqueName: \"kubernetes.io/projected/e116d50f-86db-4a6c-bb71-938d87196a40-kube-api-access-lzrxl\") pod \"multus-admission-controller-857f4d67dd-rmps9\" (UID: \"e116d50f-86db-4a6c-bb71-938d87196a40\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rmps9" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.552773 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-registry-tls\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.552809 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5690d819-973e-4ded-a4d5-b6e7bb691c54-config\") pod \"kube-controller-manager-operator-78b949d7b-v2bg8\" (UID: \"5690d819-973e-4ded-a4d5-b6e7bb691c54\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v2bg8" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.552838 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.552855 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn9lz\" (UniqueName: \"kubernetes.io/projected/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-kube-api-access-jn9lz\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.553544 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnflc\" (UniqueName: \"kubernetes.io/projected/a617aa70-de94-4903-863e-0b10a2c9253d-kube-api-access-xnflc\") pod \"openshift-config-operator-7777fb866f-9t59s\" (UID: \"a617aa70-de94-4903-863e-0b10a2c9253d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9t59s" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.553810 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5690d819-973e-4ded-a4d5-b6e7bb691c54-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-v2bg8\" (UID: \"5690d819-973e-4ded-a4d5-b6e7bb691c54\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v2bg8" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.553898 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21458848-8fef-4265-a871-d5cdb5ac937e-config\") pod \"service-ca-operator-777779d784-ftrsr\" (UID: \"21458848-8fef-4265-a871-d5cdb5ac937e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ftrsr" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.553967 4821 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21458848-8fef-4265-a871-d5cdb5ac937e-serving-cert\") pod \"service-ca-operator-777779d784-ftrsr\" (UID: \"21458848-8fef-4265-a871-d5cdb5ac937e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ftrsr" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.554040 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e116d50f-86db-4a6c-bb71-938d87196a40-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rmps9\" (UID: \"e116d50f-86db-4a6c-bb71-938d87196a40\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rmps9" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.554184 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5690d819-973e-4ded-a4d5-b6e7bb691c54-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-v2bg8\" (UID: \"5690d819-973e-4ded-a4d5-b6e7bb691c54\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v2bg8" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.554215 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.554266 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.554289 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54bln\" (UniqueName: \"kubernetes.io/projected/6b098eac-8578-4bea-ae1d-af41fc24e2b7-kube-api-access-54bln\") pod \"control-plane-machine-set-operator-78cbb6b69f-crvsh\" (UID: \"6b098eac-8578-4bea-ae1d-af41fc24e2b7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-crvsh" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.554346 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-trusted-ca\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.554371 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mgc2\" (UniqueName: \"kubernetes.io/projected/21458848-8fef-4265-a871-d5cdb5ac937e-kube-api-access-4mgc2\") pod \"service-ca-operator-777779d784-ftrsr\" (UID: \"21458848-8fef-4265-a871-d5cdb5ac937e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ftrsr" Sep 30 17:05:46 crc 
kubenswrapper[4821]: I0930 17:05:46.554474 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-bound-sa-token\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:46 crc kubenswrapper[4821]: E0930 17:05:46.554724 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:47.054713484 +0000 UTC m=+142.959759428 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.557232 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" event={"ID":"d2c09848-ef88-4c0a-8ae5-fd0e9885956c","Type":"ContainerStarted","Data":"569a8812f160d01aa61146bdfe3779794d205a76b5153a5d16ee6cbca9a7b6f8"} Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.557311 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" event={"ID":"d2c09848-ef88-4c0a-8ae5-fd0e9885956c","Type":"ContainerStarted","Data":"de5038cf20cfa96d8dddb608fc58d5adc84746130f197d0f5ac0d3f3f4ce3525"} Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.558840 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.560154 4821 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-qrpnr container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.560213 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" podUID="d2c09848-ef88-4c0a-8ae5-fd0e9885956c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.561698 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-q92lq" event={"ID":"74612077-5860-4d75-8655-e48515893c20","Type":"ContainerStarted","Data":"211eef8d12d3c373a2482b95a09fd4d90fa3baae07b973bedd4f493eee7e5071"} Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.563027 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.586366 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
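
--- annotation ----------------------------------------------------------------
The E0930 record above is the first of several identical failures in this
capture: MountVolume.MountDevice for the image-registry PVC
(pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8) cannot proceed because the
kubevirt.io.hostpath-provisioner CSI plugin has not yet re-registered with the
still-starting kubelet, so newCsiDriverClient finds no entry for that driver
name. Node-local registrations are mirrored in the node's CSINode object, so
one way to watch for the driver to appear is a small client-go check like the
sketch below (a hypothetical triage helper, not part of OpenShift; the node
name "crc" and the KUBECONFIG lookup are assumptions):

    // List the CSI drivers that have completed node registration on "crc".
    // This mirrors the information the kubelet consults when it reports
    // "not found in the list of registered CSI drivers".
    package main

    import (
        "context"
        "fmt"
        "os"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, d := range csiNode.Spec.Drivers {
            // Expect kubevirt.io.hostpath-provisioner here once the
            // csi-hostpathplugin-k25bw pod (whose volumes are being set up
            // below) is running and registered.
            fmt.Println("registered:", d.Name)
        }
    }

The oauth-openshift readiness-probe failure in the same burst ("connect:
connection refused" immediately after its ContainerStarted events) is ordinary
startup ordering: the container is running but nothing is listening on
10.217.0.16:6443 yet, so the probe fails until the server binds.
--------------------------------------------------------------------------------
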
Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.597198 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-lzvgr" event={"ID":"08d6cb47-472a-4bda-bfc0-738029e84e40","Type":"ContainerStarted","Data":"78c3ada235a9536e0a0e413c1a226c773f8660598a929b9215aa4e806a9ccd56"} Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.597238 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-lzvgr" event={"ID":"08d6cb47-472a-4bda-bfc0-738029e84e40","Type":"ContainerStarted","Data":"c1ce6d77d61b7eb78bac2195584ecc11d9d07d017a73197ef559965aa7abc73e"} Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.602929 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a-serving-cert\") pod \"apiserver-76f77b778f-jsll6\" (UID: \"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a\") " pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.632496 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9t59s" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.655702 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.655939 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54bln\" (UniqueName: \"kubernetes.io/projected/6b098eac-8578-4bea-ae1d-af41fc24e2b7-kube-api-access-54bln\") pod \"control-plane-machine-set-operator-78cbb6b69f-crvsh\" (UID: \"6b098eac-8578-4bea-ae1d-af41fc24e2b7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-crvsh" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.655997 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjt2c\" (UniqueName: \"kubernetes.io/projected/7110b719-7cb2-4f82-a854-de726147673c-kube-api-access-wjt2c\") pod \"csi-hostpathplugin-k25bw\" (UID: \"7110b719-7cb2-4f82-a854-de726147673c\") " pod="hostpath-provisioner/csi-hostpathplugin-k25bw" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.656024 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-trusted-ca\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.656045 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7110b719-7cb2-4f82-a854-de726147673c-registration-dir\") pod \"csi-hostpathplugin-k25bw\" (UID: \"7110b719-7cb2-4f82-a854-de726147673c\") " pod="hostpath-provisioner/csi-hostpathplugin-k25bw" Sep 30 17:05:46 crc kubenswrapper[4821]: E0930 17:05:46.656072 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:47.156049146 +0000 UTC m=+143.061095090 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.656130 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d421ef87-a39c-479a-8d5e-06f9a3824e63-config-volume\") pod \"dns-default-4c49j\" (UID: \"d421ef87-a39c-479a-8d5e-06f9a3824e63\") " pod="openshift-dns/dns-default-4c49j" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.656355 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.656474 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2191ace4-8f37-45cd-91da-64f22f294a7f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4qvdx\" (UID: \"2191ace4-8f37-45cd-91da-64f22f294a7f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4qvdx" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.656511 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mgc2\" (UniqueName: \"kubernetes.io/projected/21458848-8fef-4265-a871-d5cdb5ac937e-kube-api-access-4mgc2\") pod \"service-ca-operator-777779d784-ftrsr\" (UID: \"21458848-8fef-4265-a871-d5cdb5ac937e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ftrsr" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.656553 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgxs7\" (UniqueName: \"kubernetes.io/projected/3b470461-fc34-4822-a4e6-cba4c761712c-kube-api-access-lgxs7\") pod \"packageserver-d55dfcdfc-rszth\" (UID: \"3b470461-fc34-4822-a4e6-cba4c761712c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rszth" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.656618 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-625s5\" (UniqueName: \"kubernetes.io/projected/0d280380-3356-4a2a-b102-58101cfdc627-kube-api-access-625s5\") pod \"olm-operator-6b444d44fb-m2s62\" (UID: \"0d280380-3356-4a2a-b102-58101cfdc627\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m2s62" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.657204 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-bound-sa-token\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.657235 4821 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgtx5\" (UniqueName: \"kubernetes.io/projected/d421ef87-a39c-479a-8d5e-06f9a3824e63-kube-api-access-hgtx5\") pod \"dns-default-4c49j\" (UID: \"d421ef87-a39c-479a-8d5e-06f9a3824e63\") " pod="openshift-dns/dns-default-4c49j" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.657375 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b49d\" (UniqueName: \"kubernetes.io/projected/13461907-7755-4774-a0bf-379395ce0c19-kube-api-access-8b49d\") pod \"kube-storage-version-migrator-operator-b67b599dd-t2wjt\" (UID: \"13461907-7755-4774-a0bf-379395ce0c19\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t2wjt" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.657399 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0d280380-3356-4a2a-b102-58101cfdc627-srv-cert\") pod \"olm-operator-6b444d44fb-m2s62\" (UID: \"0d280380-3356-4a2a-b102-58101cfdc627\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m2s62" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.657479 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7110b719-7cb2-4f82-a854-de726147673c-plugins-dir\") pod \"csi-hostpathplugin-k25bw\" (UID: \"7110b719-7cb2-4f82-a854-de726147673c\") " pod="hostpath-provisioner/csi-hostpathplugin-k25bw" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.657575 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-registry-certificates\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.657599 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7110b719-7cb2-4f82-a854-de726147673c-csi-data-dir\") pod \"csi-hostpathplugin-k25bw\" (UID: \"7110b719-7cb2-4f82-a854-de726147673c\") " pod="hostpath-provisioner/csi-hostpathplugin-k25bw" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.657621 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/140e919b-2356-4ff3-a604-76b6320ee714-default-certificate\") pod \"router-default-5444994796-qgksb\" (UID: \"140e919b-2356-4ff3-a604-76b6320ee714\") " pod="openshift-ingress/router-default-5444994796-qgksb" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.657648 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b098eac-8578-4bea-ae1d-af41fc24e2b7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-crvsh\" (UID: \"6b098eac-8578-4bea-ae1d-af41fc24e2b7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-crvsh" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.657707 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-lzrxl\" (UniqueName: \"kubernetes.io/projected/e116d50f-86db-4a6c-bb71-938d87196a40-kube-api-access-lzrxl\") pod \"multus-admission-controller-857f4d67dd-rmps9\" (UID: \"e116d50f-86db-4a6c-bb71-938d87196a40\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rmps9" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.657739 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d421ef87-a39c-479a-8d5e-06f9a3824e63-metrics-tls\") pod \"dns-default-4c49j\" (UID: \"d421ef87-a39c-479a-8d5e-06f9a3824e63\") " pod="openshift-dns/dns-default-4c49j" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.657768 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7110b719-7cb2-4f82-a854-de726147673c-socket-dir\") pod \"csi-hostpathplugin-k25bw\" (UID: \"7110b719-7cb2-4f82-a854-de726147673c\") " pod="hostpath-provisioner/csi-hostpathplugin-k25bw" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.657826 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3b470461-fc34-4822-a4e6-cba4c761712c-webhook-cert\") pod \"packageserver-d55dfcdfc-rszth\" (UID: \"3b470461-fc34-4822-a4e6-cba4c761712c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rszth" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.657875 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/cc5f98a2-b919-47de-ad54-afc926b1f1a4-node-bootstrap-token\") pod \"machine-config-server-ldwrh\" (UID: \"cc5f98a2-b919-47de-ad54-afc926b1f1a4\") " pod="openshift-machine-config-operator/machine-config-server-ldwrh" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.657895 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-registry-tls\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.657912 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5690d819-973e-4ded-a4d5-b6e7bb691c54-config\") pod \"kube-controller-manager-operator-78b949d7b-v2bg8\" (UID: \"5690d819-973e-4ded-a4d5-b6e7bb691c54\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v2bg8" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.657925 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/cc5f98a2-b919-47de-ad54-afc926b1f1a4-certs\") pod \"machine-config-server-ldwrh\" (UID: \"cc5f98a2-b919-47de-ad54-afc926b1f1a4\") " pod="openshift-machine-config-operator/machine-config-server-ldwrh" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.657940 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/140e919b-2356-4ff3-a604-76b6320ee714-metrics-certs\") pod \"router-default-5444994796-qgksb\" (UID: 
\"140e919b-2356-4ff3-a604-76b6320ee714\") " pod="openshift-ingress/router-default-5444994796-qgksb" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.657956 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7110b719-7cb2-4f82-a854-de726147673c-mountpoint-dir\") pod \"csi-hostpathplugin-k25bw\" (UID: \"7110b719-7cb2-4f82-a854-de726147673c\") " pod="hostpath-provisioner/csi-hostpathplugin-k25bw" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.657970 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/140e919b-2356-4ff3-a604-76b6320ee714-service-ca-bundle\") pod \"router-default-5444994796-qgksb\" (UID: \"140e919b-2356-4ff3-a604-76b6320ee714\") " pod="openshift-ingress/router-default-5444994796-qgksb" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.658017 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.658031 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7v5j\" (UniqueName: \"kubernetes.io/projected/cc5f98a2-b919-47de-ad54-afc926b1f1a4-kube-api-access-k7v5j\") pod \"machine-config-server-ldwrh\" (UID: \"cc5f98a2-b919-47de-ad54-afc926b1f1a4\") " pod="openshift-machine-config-operator/machine-config-server-ldwrh" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.658048 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vsl8\" (UniqueName: \"kubernetes.io/projected/140e919b-2356-4ff3-a604-76b6320ee714-kube-api-access-5vsl8\") pod \"router-default-5444994796-qgksb\" (UID: \"140e919b-2356-4ff3-a604-76b6320ee714\") " pod="openshift-ingress/router-default-5444994796-qgksb" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.658051 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-trusted-ca\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.658066 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw5tz\" (UniqueName: \"kubernetes.io/projected/9c00679e-3b54-44b8-a074-2e036e4fdcbd-kube-api-access-sw5tz\") pod \"service-ca-9c57cc56f-f7fvc\" (UID: \"9c00679e-3b54-44b8-a074-2e036e4fdcbd\") " pod="openshift-service-ca/service-ca-9c57cc56f-f7fvc" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.660042 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn9lz\" (UniqueName: \"kubernetes.io/projected/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-kube-api-access-jn9lz\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.660075 4821 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f223df8f-2c89-4ee1-b655-50e73389cb2d-cert\") pod \"ingress-canary-hlfh2\" (UID: \"f223df8f-2c89-4ee1-b655-50e73389cb2d\") " pod="openshift-ingress-canary/ingress-canary-hlfh2" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.660219 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5690d819-973e-4ded-a4d5-b6e7bb691c54-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-v2bg8\" (UID: \"5690d819-973e-4ded-a4d5-b6e7bb691c54\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v2bg8" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.660244 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/de931745-8fd0-4f2b-9658-6c52d53a1b4e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-72jp6\" (UID: \"de931745-8fd0-4f2b-9658-6c52d53a1b4e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-72jp6" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.664325 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5690d819-973e-4ded-a4d5-b6e7bb691c54-config\") pod \"kube-controller-manager-operator-78b949d7b-v2bg8\" (UID: \"5690d819-973e-4ded-a4d5-b6e7bb691c54\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v2bg8" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.670825 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-registry-certificates\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.672002 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.660355 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f84add95-1bc2-4534-93aa-bba177335e74-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-854tr\" (UID: \"f84add95-1bc2-4534-93aa-bba177335e74\") " pod="openshift-marketplace/marketplace-operator-79b997595-854tr" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.675290 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21458848-8fef-4265-a871-d5cdb5ac937e-config\") pod \"service-ca-operator-777779d784-ftrsr\" (UID: \"21458848-8fef-4265-a871-d5cdb5ac937e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ftrsr" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.675327 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/21458848-8fef-4265-a871-d5cdb5ac937e-serving-cert\") pod \"service-ca-operator-777779d784-ftrsr\" (UID: \"21458848-8fef-4265-a871-d5cdb5ac937e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ftrsr" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.675376 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13461907-7755-4774-a0bf-379395ce0c19-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-t2wjt\" (UID: \"13461907-7755-4774-a0bf-379395ce0c19\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t2wjt" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.675410 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p7g5\" (UniqueName: \"kubernetes.io/projected/f223df8f-2c89-4ee1-b655-50e73389cb2d-kube-api-access-6p7g5\") pod \"ingress-canary-hlfh2\" (UID: \"f223df8f-2c89-4ee1-b655-50e73389cb2d\") " pod="openshift-ingress-canary/ingress-canary-hlfh2" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.675425 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9c00679e-3b54-44b8-a074-2e036e4fdcbd-signing-key\") pod \"service-ca-9c57cc56f-f7fvc\" (UID: \"9c00679e-3b54-44b8-a074-2e036e4fdcbd\") " pod="openshift-service-ca/service-ca-9c57cc56f-f7fvc" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.675457 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13461907-7755-4774-a0bf-379395ce0c19-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-t2wjt\" (UID: \"13461907-7755-4774-a0bf-379395ce0c19\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t2wjt" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.675473 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzpt8\" (UniqueName: \"kubernetes.io/projected/ead30b73-d782-4df8-b4e7-137d42ec6862-kube-api-access-fzpt8\") pod \"migrator-59844c95c7-5wqbt\" (UID: \"ead30b73-d782-4df8-b4e7-137d42ec6862\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5wqbt" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.675492 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de931745-8fd0-4f2b-9658-6c52d53a1b4e-trusted-ca\") pod \"ingress-operator-5b745b69d9-72jp6\" (UID: \"de931745-8fd0-4f2b-9658-6c52d53a1b4e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-72jp6" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.675520 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e116d50f-86db-4a6c-bb71-938d87196a40-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rmps9\" (UID: \"e116d50f-86db-4a6c-bb71-938d87196a40\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rmps9" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.675536 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/3b470461-fc34-4822-a4e6-cba4c761712c-apiservice-cert\") pod \"packageserver-d55dfcdfc-rszth\" (UID: \"3b470461-fc34-4822-a4e6-cba4c761712c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rszth" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.675611 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5690d819-973e-4ded-a4d5-b6e7bb691c54-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-v2bg8\" (UID: \"5690d819-973e-4ded-a4d5-b6e7bb691c54\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v2bg8" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.675629 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7nvw\" (UniqueName: \"kubernetes.io/projected/f84add95-1bc2-4534-93aa-bba177335e74-kube-api-access-f7nvw\") pod \"marketplace-operator-79b997595-854tr\" (UID: \"f84add95-1bc2-4534-93aa-bba177335e74\") " pod="openshift-marketplace/marketplace-operator-79b997595-854tr" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.675647 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f84add95-1bc2-4534-93aa-bba177335e74-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-854tr\" (UID: \"f84add95-1bc2-4534-93aa-bba177335e74\") " pod="openshift-marketplace/marketplace-operator-79b997595-854tr" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.675663 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/140e919b-2356-4ff3-a604-76b6320ee714-stats-auth\") pod \"router-default-5444994796-qgksb\" (UID: \"140e919b-2356-4ff3-a604-76b6320ee714\") " pod="openshift-ingress/router-default-5444994796-qgksb" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.675686 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.675715 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3b470461-fc34-4822-a4e6-cba4c761712c-tmpfs\") pod \"packageserver-d55dfcdfc-rszth\" (UID: \"3b470461-fc34-4822-a4e6-cba4c761712c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rszth" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.675737 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-692s7\" (UniqueName: \"kubernetes.io/projected/2191ace4-8f37-45cd-91da-64f22f294a7f-kube-api-access-692s7\") pod \"package-server-manager-789f6589d5-4qvdx\" (UID: \"2191ace4-8f37-45cd-91da-64f22f294a7f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4qvdx" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.675774 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" 
(UniqueName: \"kubernetes.io/configmap/9c00679e-3b54-44b8-a074-2e036e4fdcbd-signing-cabundle\") pod \"service-ca-9c57cc56f-f7fvc\" (UID: \"9c00679e-3b54-44b8-a074-2e036e4fdcbd\") " pod="openshift-service-ca/service-ca-9c57cc56f-f7fvc" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.675802 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.675821 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/de931745-8fd0-4f2b-9658-6c52d53a1b4e-metrics-tls\") pod \"ingress-operator-5b745b69d9-72jp6\" (UID: \"de931745-8fd0-4f2b-9658-6c52d53a1b4e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-72jp6" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.675846 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znc8q\" (UniqueName: \"kubernetes.io/projected/de931745-8fd0-4f2b-9658-6c52d53a1b4e-kube-api-access-znc8q\") pod \"ingress-operator-5b745b69d9-72jp6\" (UID: \"de931745-8fd0-4f2b-9658-6c52d53a1b4e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-72jp6" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.675864 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0d280380-3356-4a2a-b102-58101cfdc627-profile-collector-cert\") pod \"olm-operator-6b444d44fb-m2s62\" (UID: \"0d280380-3356-4a2a-b102-58101cfdc627\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m2s62" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.676763 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21458848-8fef-4265-a871-d5cdb5ac937e-config\") pod \"service-ca-operator-777779d784-ftrsr\" (UID: \"21458848-8fef-4265-a871-d5cdb5ac937e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ftrsr" Sep 30 17:05:46 crc kubenswrapper[4821]: E0930 17:05:46.677493 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:47.17748141 +0000 UTC m=+143.082527354 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.682043 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b098eac-8578-4bea-ae1d-af41fc24e2b7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-crvsh\" (UID: \"6b098eac-8578-4bea-ae1d-af41fc24e2b7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-crvsh" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.688999 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-registry-tls\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.691242 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21458848-8fef-4265-a871-d5cdb5ac937e-serving-cert\") pod \"service-ca-operator-777779d784-ftrsr\" (UID: \"21458848-8fef-4265-a871-d5cdb5ac937e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ftrsr" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.694429 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e116d50f-86db-4a6c-bb71-938d87196a40-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rmps9\" (UID: \"e116d50f-86db-4a6c-bb71-938d87196a40\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rmps9" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.694626 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.698942 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5690d819-973e-4ded-a4d5-b6e7bb691c54-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-v2bg8\" (UID: \"5690d819-973e-4ded-a4d5-b6e7bb691c54\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v2bg8" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.703640 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54bln\" (UniqueName: \"kubernetes.io/projected/6b098eac-8578-4bea-ae1d-af41fc24e2b7-kube-api-access-54bln\") pod \"control-plane-machine-set-operator-78cbb6b69f-crvsh\" (UID: \"6b098eac-8578-4bea-ae1d-af41fc24e2b7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-crvsh"
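
--- annotation ----------------------------------------------------------------
The MountDevice failure above is the same driver-registration problem
repeating on schedule, and the neighbouring UnmountVolume.TearDown errors show
it hits the old registry replica's volume (pod
8f668bae-612b-4b75-9490-919e737c6a3b) on the unmount path too. Each time,
nestedpendingoperations parks the operation and only permits the next attempt
500ms later ("durationBeforeRetry 500ms"), a per-volume backoff that repeats
until the driver registers. A minimal sketch of that retry shape using the
apimachinery wait helpers follows (illustration only: the kubelet's real logic
in nestedpendingoperations.go differs in detail, and mountDevice plus the
timings here are stand-ins):

    package main

    import (
        "errors"
        "fmt"
        "sync/atomic"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // registered stands in for "driver present in the kubelet's registry".
    var registered atomic.Bool

    func mountDevice() error {
        if !registered.Load() {
            return errors.New("driver not found in the list of registered CSI drivers")
        }
        return nil
    }

    func main() {
        // Driver registration completes a little later, as in the log.
        go func() {
            time.Sleep(1200 * time.Millisecond)
            registered.Store(true)
        }()
        backoff := wait.Backoff{
            Duration: 500 * time.Millisecond, // matches durationBeforeRetry 500ms
            Factor:   2.0,
            Steps:    6,
        }
        err := wait.ExponentialBackoff(backoff, func() (bool, error) {
            if err := mountDevice(); err != nil {
                fmt.Println("retrying:", err)
                return false, nil // not done yet, keep backing off
            }
            return true, nil
        })
        fmt.Println("final:", err)
    }
--------------------------------------------------------------------------------
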
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtlgs" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.712273 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lw2j2" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.718025 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-crvsh" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.720516 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mgc2\" (UniqueName: \"kubernetes.io/projected/21458848-8fef-4265-a871-d5cdb5ac937e-kube-api-access-4mgc2\") pod \"service-ca-operator-777779d784-ftrsr\" (UID: \"21458848-8fef-4265-a871-d5cdb5ac937e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ftrsr" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.734058 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320860-mxq6r" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.739087 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ftrsr" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.741310 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-bound-sa-token\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.761418 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn9lz\" (UniqueName: \"kubernetes.io/projected/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-kube-api-access-jn9lz\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.776918 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.777083 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3b470461-fc34-4822-a4e6-cba4c761712c-tmpfs\") pod \"packageserver-d55dfcdfc-rszth\" (UID: \"3b470461-fc34-4822-a4e6-cba4c761712c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rszth" Sep 30 17:05:46 crc kubenswrapper[4821]: E0930 17:05:46.777213 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:47.277170302 +0000 UTC m=+143.182216246 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.777257 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-692s7\" (UniqueName: \"kubernetes.io/projected/2191ace4-8f37-45cd-91da-64f22f294a7f-kube-api-access-692s7\") pod \"package-server-manager-789f6589d5-4qvdx\" (UID: \"2191ace4-8f37-45cd-91da-64f22f294a7f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4qvdx" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.777320 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9c00679e-3b54-44b8-a074-2e036e4fdcbd-signing-cabundle\") pod \"service-ca-9c57cc56f-f7fvc\" (UID: \"9c00679e-3b54-44b8-a074-2e036e4fdcbd\") " pod="openshift-service-ca/service-ca-9c57cc56f-f7fvc" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.777370 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/de931745-8fd0-4f2b-9658-6c52d53a1b4e-metrics-tls\") pod \"ingress-operator-5b745b69d9-72jp6\" (UID: \"de931745-8fd0-4f2b-9658-6c52d53a1b4e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-72jp6" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.777395 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znc8q\" (UniqueName: \"kubernetes.io/projected/de931745-8fd0-4f2b-9658-6c52d53a1b4e-kube-api-access-znc8q\") pod \"ingress-operator-5b745b69d9-72jp6\" (UID: \"de931745-8fd0-4f2b-9658-6c52d53a1b4e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-72jp6" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.777415 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0d280380-3356-4a2a-b102-58101cfdc627-profile-collector-cert\") pod \"olm-operator-6b444d44fb-m2s62\" (UID: \"0d280380-3356-4a2a-b102-58101cfdc627\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m2s62" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.777468 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjt2c\" (UniqueName: \"kubernetes.io/projected/7110b719-7cb2-4f82-a854-de726147673c-kube-api-access-wjt2c\") pod \"csi-hostpathplugin-k25bw\" (UID: \"7110b719-7cb2-4f82-a854-de726147673c\") " pod="hostpath-provisioner/csi-hostpathplugin-k25bw" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.777491 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7110b719-7cb2-4f82-a854-de726147673c-registration-dir\") pod \"csi-hostpathplugin-k25bw\" (UID: \"7110b719-7cb2-4f82-a854-de726147673c\") " pod="hostpath-provisioner/csi-hostpathplugin-k25bw" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.777508 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/d421ef87-a39c-479a-8d5e-06f9a3824e63-config-volume\") pod \"dns-default-4c49j\" (UID: \"d421ef87-a39c-479a-8d5e-06f9a3824e63\") " pod="openshift-dns/dns-default-4c49j" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.777553 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2191ace4-8f37-45cd-91da-64f22f294a7f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4qvdx\" (UID: \"2191ace4-8f37-45cd-91da-64f22f294a7f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4qvdx" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.777570 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgxs7\" (UniqueName: \"kubernetes.io/projected/3b470461-fc34-4822-a4e6-cba4c761712c-kube-api-access-lgxs7\") pod \"packageserver-d55dfcdfc-rszth\" (UID: \"3b470461-fc34-4822-a4e6-cba4c761712c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rszth" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.777610 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-625s5\" (UniqueName: \"kubernetes.io/projected/0d280380-3356-4a2a-b102-58101cfdc627-kube-api-access-625s5\") pod \"olm-operator-6b444d44fb-m2s62\" (UID: \"0d280380-3356-4a2a-b102-58101cfdc627\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m2s62" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.777635 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgtx5\" (UniqueName: \"kubernetes.io/projected/d421ef87-a39c-479a-8d5e-06f9a3824e63-kube-api-access-hgtx5\") pod \"dns-default-4c49j\" (UID: \"d421ef87-a39c-479a-8d5e-06f9a3824e63\") " pod="openshift-dns/dns-default-4c49j" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.777656 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b49d\" (UniqueName: \"kubernetes.io/projected/13461907-7755-4774-a0bf-379395ce0c19-kube-api-access-8b49d\") pod \"kube-storage-version-migrator-operator-b67b599dd-t2wjt\" (UID: \"13461907-7755-4774-a0bf-379395ce0c19\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t2wjt" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.777689 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0d280380-3356-4a2a-b102-58101cfdc627-srv-cert\") pod \"olm-operator-6b444d44fb-m2s62\" (UID: \"0d280380-3356-4a2a-b102-58101cfdc627\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m2s62" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.777734 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7110b719-7cb2-4f82-a854-de726147673c-plugins-dir\") pod \"csi-hostpathplugin-k25bw\" (UID: \"7110b719-7cb2-4f82-a854-de726147673c\") " pod="hostpath-provisioner/csi-hostpathplugin-k25bw" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.777890 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7110b719-7cb2-4f82-a854-de726147673c-csi-data-dir\") pod \"csi-hostpathplugin-k25bw\" (UID: \"7110b719-7cb2-4f82-a854-de726147673c\") " 
pod="hostpath-provisioner/csi-hostpathplugin-k25bw" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.777919 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/140e919b-2356-4ff3-a604-76b6320ee714-default-certificate\") pod \"router-default-5444994796-qgksb\" (UID: \"140e919b-2356-4ff3-a604-76b6320ee714\") " pod="openshift-ingress/router-default-5444994796-qgksb" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.778053 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d421ef87-a39c-479a-8d5e-06f9a3824e63-metrics-tls\") pod \"dns-default-4c49j\" (UID: \"d421ef87-a39c-479a-8d5e-06f9a3824e63\") " pod="openshift-dns/dns-default-4c49j" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.778113 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7110b719-7cb2-4f82-a854-de726147673c-socket-dir\") pod \"csi-hostpathplugin-k25bw\" (UID: \"7110b719-7cb2-4f82-a854-de726147673c\") " pod="hostpath-provisioner/csi-hostpathplugin-k25bw" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.778147 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3b470461-fc34-4822-a4e6-cba4c761712c-webhook-cert\") pod \"packageserver-d55dfcdfc-rszth\" (UID: \"3b470461-fc34-4822-a4e6-cba4c761712c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rszth" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.778283 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/cc5f98a2-b919-47de-ad54-afc926b1f1a4-node-bootstrap-token\") pod \"machine-config-server-ldwrh\" (UID: \"cc5f98a2-b919-47de-ad54-afc926b1f1a4\") " pod="openshift-machine-config-operator/machine-config-server-ldwrh" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.778304 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/cc5f98a2-b919-47de-ad54-afc926b1f1a4-certs\") pod \"machine-config-server-ldwrh\" (UID: \"cc5f98a2-b919-47de-ad54-afc926b1f1a4\") " pod="openshift-machine-config-operator/machine-config-server-ldwrh" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.778322 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/140e919b-2356-4ff3-a604-76b6320ee714-metrics-certs\") pod \"router-default-5444994796-qgksb\" (UID: \"140e919b-2356-4ff3-a604-76b6320ee714\") " pod="openshift-ingress/router-default-5444994796-qgksb" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.778746 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7110b719-7cb2-4f82-a854-de726147673c-mountpoint-dir\") pod \"csi-hostpathplugin-k25bw\" (UID: \"7110b719-7cb2-4f82-a854-de726147673c\") " pod="hostpath-provisioner/csi-hostpathplugin-k25bw" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.778766 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/140e919b-2356-4ff3-a604-76b6320ee714-service-ca-bundle\") pod \"router-default-5444994796-qgksb\" (UID: 
\"140e919b-2356-4ff3-a604-76b6320ee714\") " pod="openshift-ingress/router-default-5444994796-qgksb" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.778820 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7110b719-7cb2-4f82-a854-de726147673c-registration-dir\") pod \"csi-hostpathplugin-k25bw\" (UID: \"7110b719-7cb2-4f82-a854-de726147673c\") " pod="hostpath-provisioner/csi-hostpathplugin-k25bw" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.778889 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7v5j\" (UniqueName: \"kubernetes.io/projected/cc5f98a2-b919-47de-ad54-afc926b1f1a4-kube-api-access-k7v5j\") pod \"machine-config-server-ldwrh\" (UID: \"cc5f98a2-b919-47de-ad54-afc926b1f1a4\") " pod="openshift-machine-config-operator/machine-config-server-ldwrh" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.778911 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vsl8\" (UniqueName: \"kubernetes.io/projected/140e919b-2356-4ff3-a604-76b6320ee714-kube-api-access-5vsl8\") pod \"router-default-5444994796-qgksb\" (UID: \"140e919b-2356-4ff3-a604-76b6320ee714\") " pod="openshift-ingress/router-default-5444994796-qgksb" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.778929 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw5tz\" (UniqueName: \"kubernetes.io/projected/9c00679e-3b54-44b8-a074-2e036e4fdcbd-kube-api-access-sw5tz\") pod \"service-ca-9c57cc56f-f7fvc\" (UID: \"9c00679e-3b54-44b8-a074-2e036e4fdcbd\") " pod="openshift-service-ca/service-ca-9c57cc56f-f7fvc" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.778958 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7110b719-7cb2-4f82-a854-de726147673c-mountpoint-dir\") pod \"csi-hostpathplugin-k25bw\" (UID: \"7110b719-7cb2-4f82-a854-de726147673c\") " pod="hostpath-provisioner/csi-hostpathplugin-k25bw" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.779045 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f223df8f-2c89-4ee1-b655-50e73389cb2d-cert\") pod \"ingress-canary-hlfh2\" (UID: \"f223df8f-2c89-4ee1-b655-50e73389cb2d\") " pod="openshift-ingress-canary/ingress-canary-hlfh2" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.779059 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9c00679e-3b54-44b8-a074-2e036e4fdcbd-signing-cabundle\") pod \"service-ca-9c57cc56f-f7fvc\" (UID: \"9c00679e-3b54-44b8-a074-2e036e4fdcbd\") " pod="openshift-service-ca/service-ca-9c57cc56f-f7fvc" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.779074 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/de931745-8fd0-4f2b-9658-6c52d53a1b4e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-72jp6\" (UID: \"de931745-8fd0-4f2b-9658-6c52d53a1b4e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-72jp6" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.779207 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/f84add95-1bc2-4534-93aa-bba177335e74-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-854tr\" (UID: \"f84add95-1bc2-4534-93aa-bba177335e74\") " pod="openshift-marketplace/marketplace-operator-79b997595-854tr" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.779239 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13461907-7755-4774-a0bf-379395ce0c19-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-t2wjt\" (UID: \"13461907-7755-4774-a0bf-379395ce0c19\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t2wjt" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.779359 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9c00679e-3b54-44b8-a074-2e036e4fdcbd-signing-key\") pod \"service-ca-9c57cc56f-f7fvc\" (UID: \"9c00679e-3b54-44b8-a074-2e036e4fdcbd\") " pod="openshift-service-ca/service-ca-9c57cc56f-f7fvc" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.779382 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p7g5\" (UniqueName: \"kubernetes.io/projected/f223df8f-2c89-4ee1-b655-50e73389cb2d-kube-api-access-6p7g5\") pod \"ingress-canary-hlfh2\" (UID: \"f223df8f-2c89-4ee1-b655-50e73389cb2d\") " pod="openshift-ingress-canary/ingress-canary-hlfh2" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.779399 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13461907-7755-4774-a0bf-379395ce0c19-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-t2wjt\" (UID: \"13461907-7755-4774-a0bf-379395ce0c19\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t2wjt" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.779723 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzpt8\" (UniqueName: \"kubernetes.io/projected/ead30b73-d782-4df8-b4e7-137d42ec6862-kube-api-access-fzpt8\") pod \"migrator-59844c95c7-5wqbt\" (UID: \"ead30b73-d782-4df8-b4e7-137d42ec6862\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5wqbt" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.779882 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de931745-8fd0-4f2b-9658-6c52d53a1b4e-trusted-ca\") pod \"ingress-operator-5b745b69d9-72jp6\" (UID: \"de931745-8fd0-4f2b-9658-6c52d53a1b4e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-72jp6" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.780019 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d421ef87-a39c-479a-8d5e-06f9a3824e63-config-volume\") pod \"dns-default-4c49j\" (UID: \"d421ef87-a39c-479a-8d5e-06f9a3824e63\") " pod="openshift-dns/dns-default-4c49j" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.780036 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3b470461-fc34-4822-a4e6-cba4c761712c-apiservice-cert\") pod \"packageserver-d55dfcdfc-rszth\" (UID: \"3b470461-fc34-4822-a4e6-cba4c761712c\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rszth" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.780152 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7nvw\" (UniqueName: \"kubernetes.io/projected/f84add95-1bc2-4534-93aa-bba177335e74-kube-api-access-f7nvw\") pod \"marketplace-operator-79b997595-854tr\" (UID: \"f84add95-1bc2-4534-93aa-bba177335e74\") " pod="openshift-marketplace/marketplace-operator-79b997595-854tr" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.780178 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/140e919b-2356-4ff3-a604-76b6320ee714-stats-auth\") pod \"router-default-5444994796-qgksb\" (UID: \"140e919b-2356-4ff3-a604-76b6320ee714\") " pod="openshift-ingress/router-default-5444994796-qgksb" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.780200 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f84add95-1bc2-4534-93aa-bba177335e74-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-854tr\" (UID: \"f84add95-1bc2-4534-93aa-bba177335e74\") " pod="openshift-marketplace/marketplace-operator-79b997595-854tr" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.781374 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7110b719-7cb2-4f82-a854-de726147673c-socket-dir\") pod \"csi-hostpathplugin-k25bw\" (UID: \"7110b719-7cb2-4f82-a854-de726147673c\") " pod="hostpath-provisioner/csi-hostpathplugin-k25bw" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.781665 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/140e919b-2356-4ff3-a604-76b6320ee714-service-ca-bundle\") pod \"router-default-5444994796-qgksb\" (UID: \"140e919b-2356-4ff3-a604-76b6320ee714\") " pod="openshift-ingress/router-default-5444994796-qgksb" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.782062 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7110b719-7cb2-4f82-a854-de726147673c-csi-data-dir\") pod \"csi-hostpathplugin-k25bw\" (UID: \"7110b719-7cb2-4f82-a854-de726147673c\") " pod="hostpath-provisioner/csi-hostpathplugin-k25bw" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.782130 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7110b719-7cb2-4f82-a854-de726147673c-plugins-dir\") pod \"csi-hostpathplugin-k25bw\" (UID: \"7110b719-7cb2-4f82-a854-de726147673c\") " pod="hostpath-provisioner/csi-hostpathplugin-k25bw" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.789521 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13461907-7755-4774-a0bf-379395ce0c19-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-t2wjt\" (UID: \"13461907-7755-4774-a0bf-379395ce0c19\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t2wjt" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.790519 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/de931745-8fd0-4f2b-9658-6c52d53a1b4e-metrics-tls\") pod \"ingress-operator-5b745b69d9-72jp6\" (UID: \"de931745-8fd0-4f2b-9658-6c52d53a1b4e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-72jp6" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.791368 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f84add95-1bc2-4534-93aa-bba177335e74-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-854tr\" (UID: \"f84add95-1bc2-4534-93aa-bba177335e74\") " pod="openshift-marketplace/marketplace-operator-79b997595-854tr" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.791867 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/140e919b-2356-4ff3-a604-76b6320ee714-default-certificate\") pod \"router-default-5444994796-qgksb\" (UID: \"140e919b-2356-4ff3-a604-76b6320ee714\") " pod="openshift-ingress/router-default-5444994796-qgksb" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.795581 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de931745-8fd0-4f2b-9658-6c52d53a1b4e-trusted-ca\") pod \"ingress-operator-5b745b69d9-72jp6\" (UID: \"de931745-8fd0-4f2b-9658-6c52d53a1b4e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-72jp6" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.796790 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f223df8f-2c89-4ee1-b655-50e73389cb2d-cert\") pod \"ingress-canary-hlfh2\" (UID: \"f223df8f-2c89-4ee1-b655-50e73389cb2d\") " pod="openshift-ingress-canary/ingress-canary-hlfh2" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.797383 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/cc5f98a2-b919-47de-ad54-afc926b1f1a4-node-bootstrap-token\") pod \"machine-config-server-ldwrh\" (UID: \"cc5f98a2-b919-47de-ad54-afc926b1f1a4\") " pod="openshift-machine-config-operator/machine-config-server-ldwrh" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.798696 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3b470461-fc34-4822-a4e6-cba4c761712c-apiservice-cert\") pod \"packageserver-d55dfcdfc-rszth\" (UID: \"3b470461-fc34-4822-a4e6-cba4c761712c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rszth" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.799185 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0d280380-3356-4a2a-b102-58101cfdc627-profile-collector-cert\") pod \"olm-operator-6b444d44fb-m2s62\" (UID: \"0d280380-3356-4a2a-b102-58101cfdc627\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m2s62" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.799737 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5690d819-973e-4ded-a4d5-b6e7bb691c54-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-v2bg8\" (UID: \"5690d819-973e-4ded-a4d5-b6e7bb691c54\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v2bg8" Sep 30 17:05:46 crc 
kubenswrapper[4821]: I0930 17:05:46.799785 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d421ef87-a39c-479a-8d5e-06f9a3824e63-metrics-tls\") pod \"dns-default-4c49j\" (UID: \"d421ef87-a39c-479a-8d5e-06f9a3824e63\") " pod="openshift-dns/dns-default-4c49j" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.799874 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3b470461-fc34-4822-a4e6-cba4c761712c-webhook-cert\") pod \"packageserver-d55dfcdfc-rszth\" (UID: \"3b470461-fc34-4822-a4e6-cba4c761712c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rszth" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.800320 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzrxl\" (UniqueName: \"kubernetes.io/projected/e116d50f-86db-4a6c-bb71-938d87196a40-kube-api-access-lzrxl\") pod \"multus-admission-controller-857f4d67dd-rmps9\" (UID: \"e116d50f-86db-4a6c-bb71-938d87196a40\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rmps9" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.800377 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0d280380-3356-4a2a-b102-58101cfdc627-srv-cert\") pod \"olm-operator-6b444d44fb-m2s62\" (UID: \"0d280380-3356-4a2a-b102-58101cfdc627\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m2s62" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.801648 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/140e919b-2356-4ff3-a604-76b6320ee714-metrics-certs\") pod \"router-default-5444994796-qgksb\" (UID: \"140e919b-2356-4ff3-a604-76b6320ee714\") " pod="openshift-ingress/router-default-5444994796-qgksb" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.805153 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/cc5f98a2-b919-47de-ad54-afc926b1f1a4-certs\") pod \"machine-config-server-ldwrh\" (UID: \"cc5f98a2-b919-47de-ad54-afc926b1f1a4\") " pod="openshift-machine-config-operator/machine-config-server-ldwrh" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.807477 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2191ace4-8f37-45cd-91da-64f22f294a7f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4qvdx\" (UID: \"2191ace4-8f37-45cd-91da-64f22f294a7f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4qvdx" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.807680 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/140e919b-2356-4ff3-a604-76b6320ee714-stats-auth\") pod \"router-default-5444994796-qgksb\" (UID: \"140e919b-2356-4ff3-a604-76b6320ee714\") " pod="openshift-ingress/router-default-5444994796-qgksb" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.810589 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13461907-7755-4774-a0bf-379395ce0c19-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-t2wjt\" (UID: \"13461907-7755-4774-a0bf-379395ce0c19\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t2wjt" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.813584 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3b470461-fc34-4822-a4e6-cba4c761712c-tmpfs\") pod \"packageserver-d55dfcdfc-rszth\" (UID: \"3b470461-fc34-4822-a4e6-cba4c761712c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rszth" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.817139 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f84add95-1bc2-4534-93aa-bba177335e74-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-854tr\" (UID: \"f84add95-1bc2-4534-93aa-bba177335e74\") " pod="openshift-marketplace/marketplace-operator-79b997595-854tr" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.828286 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9c00679e-3b54-44b8-a074-2e036e4fdcbd-signing-key\") pod \"service-ca-9c57cc56f-f7fvc\" (UID: \"9c00679e-3b54-44b8-a074-2e036e4fdcbd\") " pod="openshift-service-ca/service-ca-9c57cc56f-f7fvc" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.837732 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjt2c\" (UniqueName: \"kubernetes.io/projected/7110b719-7cb2-4f82-a854-de726147673c-kube-api-access-wjt2c\") pod \"csi-hostpathplugin-k25bw\" (UID: \"7110b719-7cb2-4f82-a854-de726147673c\") " pod="hostpath-provisioner/csi-hostpathplugin-k25bw" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.863027 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-692s7\" (UniqueName: \"kubernetes.io/projected/2191ace4-8f37-45cd-91da-64f22f294a7f-kube-api-access-692s7\") pod \"package-server-manager-789f6589d5-4qvdx\" (UID: \"2191ace4-8f37-45cd-91da-64f22f294a7f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4qvdx" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.874064 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-k25bw" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.882441 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:46 crc kubenswrapper[4821]: E0930 17:05:46.882880 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:47.382868653 +0000 UTC m=+143.287914597 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.887730 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-625s5\" (UniqueName: \"kubernetes.io/projected/0d280380-3356-4a2a-b102-58101cfdc627-kube-api-access-625s5\") pod \"olm-operator-6b444d44fb-m2s62\" (UID: \"0d280380-3356-4a2a-b102-58101cfdc627\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m2s62" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.908962 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znc8q\" (UniqueName: \"kubernetes.io/projected/de931745-8fd0-4f2b-9658-6c52d53a1b4e-kube-api-access-znc8q\") pod \"ingress-operator-5b745b69d9-72jp6\" (UID: \"de931745-8fd0-4f2b-9658-6c52d53a1b4e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-72jp6" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.926843 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgtx5\" (UniqueName: \"kubernetes.io/projected/d421ef87-a39c-479a-8d5e-06f9a3824e63-kube-api-access-hgtx5\") pod \"dns-default-4c49j\" (UID: \"d421ef87-a39c-479a-8d5e-06f9a3824e63\") " pod="openshift-dns/dns-default-4c49j" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.953700 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b49d\" (UniqueName: \"kubernetes.io/projected/13461907-7755-4774-a0bf-379395ce0c19-kube-api-access-8b49d\") pod \"kube-storage-version-migrator-operator-b67b599dd-t2wjt\" (UID: \"13461907-7755-4774-a0bf-379395ce0c19\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t2wjt" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.957973 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nhz6b"] Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.980855 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgxs7\" (UniqueName: \"kubernetes.io/projected/3b470461-fc34-4822-a4e6-cba4c761712c-kube-api-access-lgxs7\") pod \"packageserver-d55dfcdfc-rszth\" (UID: \"3b470461-fc34-4822-a4e6-cba4c761712c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rszth" Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.984190 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.984197 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7nvw\" (UniqueName: \"kubernetes.io/projected/f84add95-1bc2-4534-93aa-bba177335e74-kube-api-access-f7nvw\") pod \"marketplace-operator-79b997595-854tr\" (UID: \"f84add95-1bc2-4534-93aa-bba177335e74\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-854tr" Sep 30 17:05:46 crc kubenswrapper[4821]: E0930 17:05:46.984582 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:47.484527425 +0000 UTC m=+143.389573369 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:46 crc kubenswrapper[4821]: I0930 17:05:46.985069 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:46 crc kubenswrapper[4821]: E0930 17:05:46.985887 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:47.485815517 +0000 UTC m=+143.390861461 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.001731 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7v5j\" (UniqueName: \"kubernetes.io/projected/cc5f98a2-b919-47de-ad54-afc926b1f1a4-kube-api-access-k7v5j\") pod \"machine-config-server-ldwrh\" (UID: \"cc5f98a2-b919-47de-ad54-afc926b1f1a4\") " pod="openshift-machine-config-operator/machine-config-server-ldwrh" Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.023694 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-rmps9" Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.024970 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vsl8\" (UniqueName: \"kubernetes.io/projected/140e919b-2356-4ff3-a604-76b6320ee714-kube-api-access-5vsl8\") pod \"router-default-5444994796-qgksb\" (UID: \"140e919b-2356-4ff3-a604-76b6320ee714\") " pod="openshift-ingress/router-default-5444994796-qgksb" Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.042649 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v2bg8" Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.051305 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw5tz\" (UniqueName: \"kubernetes.io/projected/9c00679e-3b54-44b8-a074-2e036e4fdcbd-kube-api-access-sw5tz\") pod \"service-ca-9c57cc56f-f7fvc\" (UID: \"9c00679e-3b54-44b8-a074-2e036e4fdcbd\") " pod="openshift-service-ca/service-ca-9c57cc56f-f7fvc" Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.061936 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/de931745-8fd0-4f2b-9658-6c52d53a1b4e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-72jp6\" (UID: \"de931745-8fd0-4f2b-9658-6c52d53a1b4e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-72jp6" Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.074792 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8gwrn"] Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.077750 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-854tr" Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.086590 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m2s62" Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.087171 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:47 crc kubenswrapper[4821]: E0930 17:05:47.087520 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:47.587505559 +0000 UTC m=+143.492551503 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.092945 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzpt8\" (UniqueName: \"kubernetes.io/projected/ead30b73-d782-4df8-b4e7-137d42ec6862-kube-api-access-fzpt8\") pod \"migrator-59844c95c7-5wqbt\" (UID: \"ead30b73-d782-4df8-b4e7-137d42ec6862\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5wqbt" Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.100188 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-qgksb" Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.104513 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rszth" Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.111073 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t2wjt" Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.118579 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4qvdx" Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.127811 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-4c49j" Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.133132 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-f7fvc" Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.142534 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-72jp6" Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.157341 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-ldwrh" Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.161082 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8pkhf"] Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.173852 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls"] Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.189899 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:47 crc kubenswrapper[4821]: E0930 17:05:47.190185 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:47.690170225 +0000 UTC m=+143.595216169 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.202130 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p7g5\" (UniqueName: \"kubernetes.io/projected/f223df8f-2c89-4ee1-b655-50e73389cb2d-kube-api-access-6p7g5\") pod \"ingress-canary-hlfh2\" (UID: \"f223df8f-2c89-4ee1-b655-50e73389cb2d\") " pod="openshift-ingress-canary/ingress-canary-hlfh2" Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.247924 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-wvc92"] Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.311738 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:47 crc kubenswrapper[4821]: E0930 17:05:47.312231 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:47.812214224 +0000 UTC m=+143.717260168 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.377366 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5wqbt" Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.407020 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtlgs"] Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.424428 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:47 crc kubenswrapper[4821]: E0930 17:05:47.424883 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:47.924870329 +0000 UTC m=+143.829916273 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.452876 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hlfh2" Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.469366 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jsll6"] Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.512685 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320860-mxq6r"] Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.524981 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:47 crc kubenswrapper[4821]: E0930 17:05:47.525397 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:48.025383351 +0000 UTC m=+143.930429285 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.546353 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9t59s"] Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.571609 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-crvsh"] Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.594805 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lw2j2"] Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.606543 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ftrsr"] Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.626527 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:47 crc kubenswrapper[4821]: E0930 17:05:47.626855 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:48.126843278 +0000 UTC m=+144.031889222 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.631960 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5fcf" event={"ID":"e4f2fba6-7528-40dc-8c18-6ca44115cf2a","Type":"ContainerStarted","Data":"b25340b7dda4be13a5eebd95bb7b685cf460d4c919878a0ffa246003f720fa97"} Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.631996 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5fcf" event={"ID":"e4f2fba6-7528-40dc-8c18-6ca44115cf2a","Type":"ContainerStarted","Data":"41d6fe67bcd7930628ca3fc574aee28ee34c94dc791afbc9da0ae6c22bc4600c"} Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.641749 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-q92lq" event={"ID":"74612077-5860-4d75-8655-e48515893c20","Type":"ContainerStarted","Data":"aa38049927694d257163537bdf8f4b5146151b437db9f65a02b548cedc650a0e"} Sep 30 17:05:47 crc kubenswrapper[4821]: W0930 17:05:47.644237 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod140e919b_2356_4ff3_a604_76b6320ee714.slice/crio-4c51e0588945dda81b50c5e28e169ede3866eb9f61b75e8739705a80c0117203 WatchSource:0}: Error finding container 4c51e0588945dda81b50c5e28e169ede3866eb9f61b75e8739705a80c0117203: Status 404 returned error can't find the container with id 4c51e0588945dda81b50c5e28e169ede3866eb9f61b75e8739705a80c0117203 Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.658962 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-wxvjw" event={"ID":"2f3a466e-6003-48b2-b71d-7047639f3548","Type":"ContainerStarted","Data":"686aca2798d199df62f83c00a32da5201871df00b8c3972675457447fc8e100e"} Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.671875 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-5kj7q" event={"ID":"5bf4a817-fc7e-4b1d-a530-b8280f4f2d0b","Type":"ContainerStarted","Data":"22e0f36a07d550bacc4a95e4c2da4800b61c44ef9d3af4c5314af1ae322aa67f"} Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.671919 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-5kj7q" event={"ID":"5bf4a817-fc7e-4b1d-a530-b8280f4f2d0b","Type":"ContainerStarted","Data":"f1da838669c947e54a218e5e9a04b3051ffa7205b0996d611119dc52858016c3"} Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.672512 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-5kj7q" Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.679275 4821 patch_prober.go:28] interesting pod/console-operator-58897d9998-5kj7q container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: 
connect: connection refused" start-of-body= Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.679338 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-5kj7q" podUID="5bf4a817-fc7e-4b1d-a530-b8280f4f2d0b" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.684510 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-hn4g9" event={"ID":"1366fe1c-9d0b-4a6e-bfa4-7ced09204637","Type":"ContainerStarted","Data":"646e230347e9b7cefeaa02cc5a213ebb7d7c99c8c4c124a03dccccb7a954354f"} Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.686980 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" podStartSLOduration=122.686963304 podStartE2EDuration="2m2.686963304s" podCreationTimestamp="2025-09-30 17:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:47.686663956 +0000 UTC m=+143.591709900" watchObservedRunningTime="2025-09-30 17:05:47.686963304 +0000 UTC m=+143.592009248" Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.716606 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8pkhf" event={"ID":"0353afa5-86b4-40c4-9633-c75046a0e84d","Type":"ContainerStarted","Data":"41df9de0497cdb27c145bf4a1cf2d29e86e7e648e1c3a5b7f40afd2ee1a103c3"} Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.721686 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8gwrn" event={"ID":"dcff15ff-1913-4b35-bb98-0942fc11bdf3","Type":"ContainerStarted","Data":"d59d54398c3595d809376079434685b1156a175cd6200b5c8da250d082fe12e4"} Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.730437 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:47 crc kubenswrapper[4821]: E0930 17:05:47.731761 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:48.231733879 +0000 UTC m=+144.136779823 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.732712 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zhl6b" event={"ID":"d87b4d2e-ced9-47af-8556-ddd4e0d57769","Type":"ContainerStarted","Data":"3bfd72273d222a88f6c009cc85d2510100b7278b8cb6ab9de71f02b7ee6ae7b7"} Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.735988 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nhz6b" event={"ID":"fd286c93-29a6-48e6-b22c-fa70d5bf8e21","Type":"ContainerStarted","Data":"d339512a59ba5520694497219d1c95916753e8ad6887deaa8e1fcd8f6c27731b"} Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.748454 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-k25bw"] Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.759371 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tqpl" event={"ID":"eb3298cf-506d-4b81-a283-801f689c5db6","Type":"ContainerStarted","Data":"a97e50ef2530eca27988387e9898a4a30ff38851b94e04da084d807058e3728b"} Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.782134 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls" event={"ID":"ccaea5c4-3efc-48ab-8159-6db6f5f77555","Type":"ContainerStarted","Data":"78128781a768e96fbfdaba2dbf06e77ee79e75018175f238080586d255a7343f"} Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.786866 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bsh9r" event={"ID":"0e281506-b9d7-4e26-964f-e472f7f2661f","Type":"ContainerStarted","Data":"529af75ef8d605ae2ae99aaf0a05cd58f62939ab87ba7673516ed20fd04c1ef4"} Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.787343 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bsh9r" Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.797070 4821 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-bsh9r container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.797129 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bsh9r" podUID="0e281506-b9d7-4e26-964f-e472f7f2661f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.836282 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wvc92" event={"ID":"4241c54f-7e90-4a5b-91d2-4904fd633c35","Type":"ContainerStarted","Data":"9796f6bc33af80942aecc309d019445800bae40868a9efad615d96ab91e77096"} Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.837226 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:47 crc kubenswrapper[4821]: E0930 17:05:47.838649 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:48.338636691 +0000 UTC m=+144.243682625 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.845001 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9t59s" event={"ID":"a617aa70-de94-4903-863e-0b10a2c9253d","Type":"ContainerStarted","Data":"7f62f3ba26698d15c342e88b495257fb894c6f9f1f276c87e2841c3310c8506c"} Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.851290 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtlgs" event={"ID":"21855650-5cbc-49eb-8d6c-d6846546769d","Type":"ContainerStarted","Data":"fcd81bf4c7391bb5fb0a631425e59d664565fe0eb6025592be8c1161f31e91c9"} Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.852384 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-lzvgr" podStartSLOduration=122.852364792 podStartE2EDuration="2m2.852364792s" podCreationTimestamp="2025-09-30 17:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:47.800285846 +0000 UTC m=+143.705331790" watchObservedRunningTime="2025-09-30 17:05:47.852364792 +0000 UTC m=+143.757410736" Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.864013 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4fxjh" event={"ID":"f0fb9646-336c-4014-92ca-bb5caa55dde5","Type":"ContainerStarted","Data":"89112ce2fe2b0afd06420b7745bfd586b23c77eea50b432a59f4afa92dfdc3d5"} Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.864910 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-4fxjh" Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.892120 4821 patch_prober.go:28] interesting pod/downloads-7954f5f757-4fxjh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 
10.217.0.17:8080: connect: connection refused" start-of-body= Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.892173 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4fxjh" podUID="f0fb9646-336c-4014-92ca-bb5caa55dde5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.893834 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rbgqj" event={"ID":"2adde79f-8b11-4ac3-88b6-ea7d2e6c5870","Type":"ContainerStarted","Data":"1d78d63cf3492f9f2f8f6e0123bf77d156cb9c52ab444fff677e966d5b56dca4"} Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.899787 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gpsch" event={"ID":"75516b13-a330-4e17-a2e1-bd1c04ad9500","Type":"ContainerStarted","Data":"b72de9685395e8f3e7e95a9f212d20ef93dfcde20577990c107b40418060c2fc"} Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.899814 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gpsch" event={"ID":"75516b13-a330-4e17-a2e1-bd1c04ad9500","Type":"ContainerStarted","Data":"a4f8ab20a08641a375fb2948d43d826774fcea9feefb26b21289dde36b5dad0a"} Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.938014 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:47 crc kubenswrapper[4821]: E0930 17:05:47.938171 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:48.438150878 +0000 UTC m=+144.343196822 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.938553 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:47 crc kubenswrapper[4821]: E0930 17:05:47.938927 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:48.438910037 +0000 UTC m=+144.343955971 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.964281 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6tqpl" podStartSLOduration=121.964265979 podStartE2EDuration="2m1.964265979s" podCreationTimestamp="2025-09-30 17:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:47.92017073 +0000 UTC m=+143.825216684" watchObservedRunningTime="2025-09-30 17:05:47.964265979 +0000 UTC m=+143.869311933" Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.967174 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v2bg8"] Sep 30 17:05:47 crc kubenswrapper[4821]: I0930 17:05:47.999183 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-854tr"] Sep 30 17:05:48 crc kubenswrapper[4821]: I0930 17:05:48.020696 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rmps9"] Sep 30 17:05:48 crc kubenswrapper[4821]: I0930 17:05:48.041506 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:48 crc kubenswrapper[4821]: E0930 17:05:48.042254 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:48.542236279 +0000 UTC m=+144.447282223 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:48 crc kubenswrapper[4821]: I0930 17:05:48.122745 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-f7fvc"] Sep 30 17:05:48 crc kubenswrapper[4821]: I0930 17:05:48.149731 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:48 crc kubenswrapper[4821]: E0930 17:05:48.150125 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:48.650113735 +0000 UTC m=+144.555159679 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:48 crc kubenswrapper[4821]: I0930 17:05:48.255787 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:48 crc kubenswrapper[4821]: E0930 17:05:48.256328 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:48.756312809 +0000 UTC m=+144.661358753 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:48 crc kubenswrapper[4821]: I0930 17:05:48.276583 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-72jp6"] Sep 30 17:05:48 crc kubenswrapper[4821]: I0930 17:05:48.357833 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:48 crc kubenswrapper[4821]: E0930 17:05:48.358156 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:48.858144775 +0000 UTC m=+144.763190719 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:48 crc kubenswrapper[4821]: I0930 17:05:48.425899 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wstkc" podStartSLOduration=123.425874831 podStartE2EDuration="2m3.425874831s" podCreationTimestamp="2025-09-30 17:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:48.421797541 +0000 UTC m=+144.326843485" watchObservedRunningTime="2025-09-30 17:05:48.425874831 +0000 UTC m=+144.330920785" Sep 30 17:05:48 crc kubenswrapper[4821]: W0930 17:05:48.431882 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c00679e_3b54_44b8_a074_2e036e4fdcbd.slice/crio-b02de74763c09af37e95c0a6ec7aa1fec3b069c530f5e7dec54f95905bbb5a8f WatchSource:0}: Error finding container b02de74763c09af37e95c0a6ec7aa1fec3b069c530f5e7dec54f95905bbb5a8f: Status 404 returned error can't find the container with id b02de74763c09af37e95c0a6ec7aa1fec3b069c530f5e7dec54f95905bbb5a8f Sep 30 17:05:48 crc kubenswrapper[4821]: I0930 17:05:48.460233 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:48 crc kubenswrapper[4821]: E0930 17:05:48.460669 4821 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:48.960650207 +0000 UTC m=+144.865696151 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:48 crc kubenswrapper[4821]: I0930 17:05:48.484620 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:05:48 crc kubenswrapper[4821]: I0930 17:05:48.518617 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-4c49j"] Sep 30 17:05:48 crc kubenswrapper[4821]: I0930 17:05:48.569770 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:48 crc kubenswrapper[4821]: E0930 17:05:48.570176 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:49.070161714 +0000 UTC m=+144.975207658 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:48 crc kubenswrapper[4821]: I0930 17:05:48.610709 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rszth"] Sep 30 17:05:48 crc kubenswrapper[4821]: I0930 17:05:48.660467 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5wqbt"] Sep 30 17:05:48 crc kubenswrapper[4821]: I0930 17:05:48.676001 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:48 crc kubenswrapper[4821]: E0930 17:05:48.676342 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:49.176326527 +0000 UTC m=+145.081372471 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:48 crc kubenswrapper[4821]: I0930 17:05:48.729935 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t2wjt"] Sep 30 17:05:48 crc kubenswrapper[4821]: I0930 17:05:48.777221 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:48 crc kubenswrapper[4821]: E0930 17:05:48.777524 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:49.277512377 +0000 UTC m=+145.182558321 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:48 crc kubenswrapper[4821]: I0930 17:05:48.803934 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-wxvjw" podStartSLOduration=123.803897543 podStartE2EDuration="2m3.803897543s" podCreationTimestamp="2025-09-30 17:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:48.80095043 +0000 UTC m=+144.705996374" watchObservedRunningTime="2025-09-30 17:05:48.803897543 +0000 UTC m=+144.708943487" Sep 30 17:05:48 crc kubenswrapper[4821]: I0930 17:05:48.883914 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:48 crc kubenswrapper[4821]: E0930 17:05:48.884311 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:49.384298176 +0000 UTC m=+145.289344120 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:48 crc kubenswrapper[4821]: W0930 17:05:48.909438 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b470461_fc34_4822_a4e6_cba4c761712c.slice/crio-8ea6561ed64acf2255c0e8afdc29245d557ac571b7d0ae6b559c3ed6c2066928 WatchSource:0}: Error finding container 8ea6561ed64acf2255c0e8afdc29245d557ac571b7d0ae6b559c3ed6c2066928: Status 404 returned error can't find the container with id 8ea6561ed64acf2255c0e8afdc29245d557ac571b7d0ae6b559c3ed6c2066928 Sep 30 17:05:48 crc kubenswrapper[4821]: I0930 17:05:48.968045 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-ldwrh" event={"ID":"cc5f98a2-b919-47de-ad54-afc926b1f1a4","Type":"ContainerStarted","Data":"7166a21ba2ed1b1fe04030f5270b121df9c07bce0a674f4740b147bf7b7ea944"} Sep 30 17:05:48 crc kubenswrapper[4821]: I0930 17:05:48.986640 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:48 crc kubenswrapper[4821]: E0930 17:05:48.986946 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:49.486935951 +0000 UTC m=+145.391981895 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:48 crc kubenswrapper[4821]: I0930 17:05:48.987980 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lw2j2" event={"ID":"901c3ade-323e-469e-a9dd-6e568baadded","Type":"ContainerStarted","Data":"b85cbe533d2a2fa696f2dec97cd02897653840d3061067bfa1a0d6971097df4b"} Sep 30 17:05:48 crc kubenswrapper[4821]: I0930 17:05:48.988007 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lw2j2" event={"ID":"901c3ade-323e-469e-a9dd-6e568baadded","Type":"ContainerStarted","Data":"f52413e881abcc9140ebfa343271a2daa0812a7afe33ce8feaa41edbe19c48c8"} Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.012736 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m2s62"] Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.026062 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wvc92" event={"ID":"4241c54f-7e90-4a5b-91d2-4904fd633c35","Type":"ContainerStarted","Data":"4edd7c77c3eb8b6145eee66340c245831f61d016354f17e0bc4e88cf472c9ef9"} Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.074510 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-854tr" event={"ID":"f84add95-1bc2-4534-93aa-bba177335e74","Type":"ContainerStarted","Data":"8a79ecf4d761ccbe7ade9bb0c88cf6d2430052499f690eded76c472c42b5c340"} Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.084736 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-hlfh2"] Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.087660 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:49 crc kubenswrapper[4821]: E0930 17:05:49.088073 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:49.588056418 +0000 UTC m=+145.493102362 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.123552 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qgksb" event={"ID":"140e919b-2356-4ff3-a604-76b6320ee714","Type":"ContainerStarted","Data":"4c51e0588945dda81b50c5e28e169ede3866eb9f61b75e8739705a80c0117203"} Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.178437 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-5kj7q" podStartSLOduration=124.178406158 podStartE2EDuration="2m4.178406158s" podCreationTimestamp="2025-09-30 17:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:49.140260589 +0000 UTC m=+145.045306533" watchObservedRunningTime="2025-09-30 17:05:49.178406158 +0000 UTC m=+145.083452132" Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.189516 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:49 crc kubenswrapper[4821]: E0930 17:05:49.189987 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:49.689970856 +0000 UTC m=+145.595016790 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.197772 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jsll6" event={"ID":"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a","Type":"ContainerStarted","Data":"530a30c42db7f9a7c871c822cddcffa79676c848227ed46591114c138ab09482"} Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.297496 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:49 crc kubenswrapper[4821]: E0930 17:05:49.297813 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:49.79779886 +0000 UTC m=+145.702844804 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.298111 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:49 crc kubenswrapper[4821]: E0930 17:05:49.298348 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:49.798342025 +0000 UTC m=+145.703387959 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.336940 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-hn4g9" podStartSLOduration=123.336924335 podStartE2EDuration="2m3.336924335s" podCreationTimestamp="2025-09-30 17:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:49.336667968 +0000 UTC m=+145.241713912" watchObservedRunningTime="2025-09-30 17:05:49.336924335 +0000 UTC m=+145.241970279" Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.338237 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bsh9r" podStartSLOduration=123.338229597 podStartE2EDuration="2m3.338229597s" podCreationTimestamp="2025-09-30 17:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:49.262269026 +0000 UTC m=+145.167314970" watchObservedRunningTime="2025-09-30 17:05:49.338229597 +0000 UTC m=+145.243275541" Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.340886 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rmps9" event={"ID":"e116d50f-86db-4a6c-bb71-938d87196a40","Type":"ContainerStarted","Data":"a4994360625492b0e53b1007d5a34d2465c376e48b00ffc8e2f0d4ad2b83636e"} Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.347204 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-f7fvc" event={"ID":"9c00679e-3b54-44b8-a074-2e036e4fdcbd","Type":"ContainerStarted","Data":"b02de74763c09af37e95c0a6ec7aa1fec3b069c530f5e7dec54f95905bbb5a8f"} Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.349203 4821 patch_prober.go:28] interesting pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.349260 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.394439 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k25bw" event={"ID":"7110b719-7cb2-4f82-a854-de726147673c","Type":"ContainerStarted","Data":"9b159cb2ea7ae6da816cf7912a5c6f693ea2edc750d68294e4bb5b0f23edeab0"} Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.399502 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-ftrsr" event={"ID":"21458848-8fef-4265-a871-d5cdb5ac937e","Type":"ContainerStarted","Data":"aaac98d303e8b5e8ad745c06853a19f71013b4a4a3bd5db1d602b55ba82c0170"} Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.400697 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:49 crc kubenswrapper[4821]: E0930 17:05:49.401000 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:49.90098424 +0000 UTC m=+145.806030184 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.428257 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4qvdx"] Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.435480 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v2bg8" event={"ID":"5690d819-973e-4ded-a4d5-b6e7bb691c54","Type":"ContainerStarted","Data":"6671f4beae0bb975728b0c8490c751d7c8e36a6873ec9e6f9a93896f7d0f63e5"} Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.446110 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-4c49j" event={"ID":"d421ef87-a39c-479a-8d5e-06f9a3824e63","Type":"ContainerStarted","Data":"f86e01833c61ccf39cc568966478789373ff76517db19b2500a368620d9d5576"} Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.459497 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5wqbt" event={"ID":"ead30b73-d782-4df8-b4e7-137d42ec6862","Type":"ContainerStarted","Data":"b9dc6b96326ccc833d11566fe6c3856535c3a7e3bf29514fe2d0770faa5e989f"} Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.504437 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:49 crc kubenswrapper[4821]: E0930 17:05:49.504791 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:50.004778744 +0000 UTC m=+145.909824688 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.545214 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320860-mxq6r" event={"ID":"56fb6d19-7b78-4122-9989-0676a86c33dd","Type":"ContainerStarted","Data":"ace4499a96e50b77e821caea7d1d7f9e1ee56b0f0342bc76f55ae98eecaf7be9"} Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.563613 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zhl6b" podStartSLOduration=123.563598249 podStartE2EDuration="2m3.563598249s" podCreationTimestamp="2025-09-30 17:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:49.563026595 +0000 UTC m=+145.468072539" watchObservedRunningTime="2025-09-30 17:05:49.563598249 +0000 UTC m=+145.468644193" Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.572471 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nhz6b" event={"ID":"fd286c93-29a6-48e6-b22c-fa70d5bf8e21","Type":"ContainerStarted","Data":"3783dcabcd09a6d009c64e6f8ec0e56bd1a7e9cb91fee4563a7da046357bd911"} Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.578584 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-crvsh" event={"ID":"6b098eac-8578-4bea-ae1d-af41fc24e2b7","Type":"ContainerStarted","Data":"327184a99c340a88024eb7d6f8c7bbc6fcd467d770cc18cd11dd3b4cad72a378"} Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.579757 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-72jp6" event={"ID":"de931745-8fd0-4f2b-9658-6c52d53a1b4e","Type":"ContainerStarted","Data":"6a84f6f336b75d4b8ead2afd4dee7b0b1c7ada536872984cb080167f37b0eb55"} Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.583935 4821 patch_prober.go:28] interesting pod/downloads-7954f5f757-4fxjh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.583979 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4fxjh" podUID="f0fb9646-336c-4014-92ca-bb5caa55dde5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.594297 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-4fxjh" podStartSLOduration=123.594282083 podStartE2EDuration="2m3.594282083s" podCreationTimestamp="2025-09-30 17:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:49.592706834 +0000 UTC m=+145.497752778" watchObservedRunningTime="2025-09-30 17:05:49.594282083 +0000 UTC m=+145.499328027" Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.599016 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bsh9r" Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.605858 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:49 crc kubenswrapper[4821]: E0930 17:05:49.606243 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:50.10623124 +0000 UTC m=+146.011277184 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.622355 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rbgqj" podStartSLOduration=124.622339061 podStartE2EDuration="2m4.622339061s" podCreationTimestamp="2025-09-30 17:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:49.613607614 +0000 UTC m=+145.518653558" watchObservedRunningTime="2025-09-30 17:05:49.622339061 +0000 UTC m=+145.527385005" Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.707781 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:49 crc kubenswrapper[4821]: E0930 17:05:49.737478 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:50.237463387 +0000 UTC m=+146.142509331 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.773553 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-nhz6b" podStartSLOduration=123.773533806 podStartE2EDuration="2m3.773533806s" podCreationTimestamp="2025-09-30 17:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:49.730085784 +0000 UTC m=+145.635131728" watchObservedRunningTime="2025-09-30 17:05:49.773533806 +0000 UTC m=+145.678579750" Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.802807 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29320860-mxq6r" podStartSLOduration=124.802786864 podStartE2EDuration="2m4.802786864s" podCreationTimestamp="2025-09-30 17:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:49.773120256 +0000 UTC m=+145.678166210" watchObservedRunningTime="2025-09-30 17:05:49.802786864 +0000 UTC m=+145.707832808" Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.810072 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:49 crc kubenswrapper[4821]: E0930 17:05:49.810473 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:50.310454144 +0000 UTC m=+146.215500088 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.847408 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-5kj7q" Sep 30 17:05:49 crc kubenswrapper[4821]: I0930 17:05:49.918605 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:49 crc kubenswrapper[4821]: E0930 17:05:49.919176 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:50.419163051 +0000 UTC m=+146.324208995 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.019808 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:50 crc kubenswrapper[4821]: E0930 17:05:50.020219 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:50.520200837 +0000 UTC m=+146.425246781 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.121125 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:50 crc kubenswrapper[4821]: E0930 17:05:50.121460 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:50.621448277 +0000 UTC m=+146.526494211 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.225233 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:50 crc kubenswrapper[4821]: E0930 17:05:50.226041 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:50.726024341 +0000 UTC m=+146.631070285 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.328073 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:50 crc kubenswrapper[4821]: E0930 17:05:50.328417 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:50.82840677 +0000 UTC m=+146.733452714 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.428707 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:50 crc kubenswrapper[4821]: E0930 17:05:50.429442 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:50.929426686 +0000 UTC m=+146.834472630 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.530229 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:50 crc kubenswrapper[4821]: E0930 17:05:50.530641 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:51.030625246 +0000 UTC m=+146.935671190 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.618345 4821 generic.go:334] "Generic (PLEG): container finished" podID="699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a" containerID="bc1fa4e41e8fe4bf29ef45f718a2ed99e3b7ff03e0431d4c36029ba2881592bc" exitCode=0 Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.618409 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jsll6" event={"ID":"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a","Type":"ContainerDied","Data":"bc1fa4e41e8fe4bf29ef45f718a2ed99e3b7ff03e0431d4c36029ba2881592bc"} Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.631107 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:50 crc kubenswrapper[4821]: E0930 17:05:50.631264 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:51.131217191 +0000 UTC m=+147.036263135 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.631388 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:50 crc kubenswrapper[4821]: E0930 17:05:50.631694 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:51.131686992 +0000 UTC m=+147.036732936 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.647870 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lw2j2" event={"ID":"901c3ade-323e-469e-a9dd-6e568baadded","Type":"ContainerStarted","Data":"5fb9f511dfb11b2a0e2abc44aff384b8df15c7c532792aa22bdc6f973d82fc55"} Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.649633 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4qvdx" event={"ID":"2191ace4-8f37-45cd-91da-64f22f294a7f","Type":"ContainerStarted","Data":"aef8efdcc5eb077085738774ef52c192096ab3fbf7defd47eac0cfd8f343c242"} Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.660093 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ftrsr" event={"ID":"21458848-8fef-4265-a871-d5cdb5ac937e","Type":"ContainerStarted","Data":"5747081c34cc82b67a145a15d05049f4789fdf3f0bc4d2f63f0442017ed5ee18"} Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.679916 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-ldwrh" event={"ID":"cc5f98a2-b919-47de-ad54-afc926b1f1a4","Type":"ContainerStarted","Data":"ab8f8466851d625f3fcef2326df4f0197a69f67596f87f43282acf9da440c180"} Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.699453 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wvc92" event={"ID":"4241c54f-7e90-4a5b-91d2-4904fd633c35","Type":"ContainerStarted","Data":"a8385b23c203566e03431da8c58ed0942c8de34e951a5ef8e175eae68c35aacc"} Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.715516 4821 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-q92lq" event={"ID":"74612077-5860-4d75-8655-e48515893c20","Type":"ContainerStarted","Data":"b45b59f024a259e39362ff859654abe7c88633cf4054c026faa34c0cf3b7d199"} Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.717150 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lw2j2" podStartSLOduration=124.71714017 podStartE2EDuration="2m4.71714017s" podCreationTimestamp="2025-09-30 17:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:50.712309199 +0000 UTC m=+146.617355153" watchObservedRunningTime="2025-09-30 17:05:50.71714017 +0000 UTC m=+146.622186114" Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.733279 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:50 crc kubenswrapper[4821]: E0930 17:05:50.734158 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:51.234126802 +0000 UTC m=+147.139172746 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.748262 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rmps9" event={"ID":"e116d50f-86db-4a6c-bb71-938d87196a40","Type":"ContainerStarted","Data":"83635bfbcd05a2dc70692cca304c8134283206cb355b339250a141afce08b54b"} Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.759280 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ftrsr" podStartSLOduration=124.759265968 podStartE2EDuration="2m4.759265968s" podCreationTimestamp="2025-09-30 17:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:50.751577656 +0000 UTC m=+146.656623600" watchObservedRunningTime="2025-09-30 17:05:50.759265968 +0000 UTC m=+146.664311912" Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.768113 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtlgs" event={"ID":"21855650-5cbc-49eb-8d6c-d6846546769d","Type":"ContainerStarted","Data":"68304afa8da4039e46af7da355fdefcaee669ebca6a72b5d062fbe2975731e13"} Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.769116 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtlgs" Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.770057 4821 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-gtlgs container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.770112 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtlgs" podUID="21855650-5cbc-49eb-8d6c-d6846546769d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.771987 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5fcf" event={"ID":"e4f2fba6-7528-40dc-8c18-6ca44115cf2a","Type":"ContainerStarted","Data":"b47ee6804543d00496f0d13551760045f7ad6795093ed2fd7b0799605b7675e1"} Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.782532 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rszth" event={"ID":"3b470461-fc34-4822-a4e6-cba4c761712c","Type":"ContainerStarted","Data":"8ea6561ed64acf2255c0e8afdc29245d557ac571b7d0ae6b559c3ed6c2066928"} Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.783773 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rszth" Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.784812 4821 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-rszth container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.784851 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rszth" podUID="3b470461-fc34-4822-a4e6-cba4c761712c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.787874 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-f7fvc" event={"ID":"9c00679e-3b54-44b8-a074-2e036e4fdcbd","Type":"ContainerStarted","Data":"6fff712dd79f8a3649ed6f203fcf0fbe153348475067d032363a57fbb69456ff"} Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.792455 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m2s62" event={"ID":"0d280380-3356-4a2a-b102-58101cfdc627","Type":"ContainerStarted","Data":"b20cc8e3cb74301ac5c6d1bccc93dfc1501a64dc30479cd5dee420417a356125"} Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.795109 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gpsch" event={"ID":"75516b13-a330-4e17-a2e1-bd1c04ad9500","Type":"ContainerStarted","Data":"78501e85aa7203cced4095eef4dbd8c0c6c260de1d90e53fdde519a87a5ecbbc"} Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 
17:05:50.814954 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t2wjt" event={"ID":"13461907-7755-4774-a0bf-379395ce0c19","Type":"ContainerStarted","Data":"0ca223c8cd2d5cb07c6dc461a8a95f2f749dd493125809675a321a32e947cfd9"} Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.819790 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-ldwrh" podStartSLOduration=6.819775485 podStartE2EDuration="6.819775485s" podCreationTimestamp="2025-09-30 17:05:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:50.818302798 +0000 UTC m=+146.723348742" watchObservedRunningTime="2025-09-30 17:05:50.819775485 +0000 UTC m=+146.724821429" Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.832418 4821 generic.go:334] "Generic (PLEG): container finished" podID="a617aa70-de94-4903-863e-0b10a2c9253d" containerID="ab69d6cdd4a2333a17fcbc6f597d84da588a8412fb8a590fc652b1017e0ffe7d" exitCode=0 Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.832483 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9t59s" event={"ID":"a617aa70-de94-4903-863e-0b10a2c9253d","Type":"ContainerDied","Data":"ab69d6cdd4a2333a17fcbc6f597d84da588a8412fb8a590fc652b1017e0ffe7d"} Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.835894 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:50 crc kubenswrapper[4821]: E0930 17:05:50.836177 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:51.336164623 +0000 UTC m=+147.241210577 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.892752 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-72jp6" event={"ID":"de931745-8fd0-4f2b-9658-6c52d53a1b4e","Type":"ContainerStarted","Data":"6a53e100b1bb6b999a327c97cce9ba09b85985881edc2c938c14232639db0457"} Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.906044 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qgksb" event={"ID":"140e919b-2356-4ff3-a604-76b6320ee714","Type":"ContainerStarted","Data":"c630385447b5de9fd4df027327edc439ffeb08be0ed8e319a9dee96b17fc08e5"} Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.907104 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-q92lq" podStartSLOduration=124.907071269 podStartE2EDuration="2m4.907071269s" podCreationTimestamp="2025-09-30 17:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:50.905215552 +0000 UTC m=+146.810261496" watchObservedRunningTime="2025-09-30 17:05:50.907071269 +0000 UTC m=+146.812117213" Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.907464 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wvc92" podStartSLOduration=124.907458448 podStartE2EDuration="2m4.907458448s" podCreationTimestamp="2025-09-30 17:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:50.86134385 +0000 UTC m=+146.766389794" watchObservedRunningTime="2025-09-30 17:05:50.907458448 +0000 UTC m=+146.812504402" Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.909163 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320860-mxq6r" event={"ID":"56fb6d19-7b78-4122-9989-0676a86c33dd","Type":"ContainerStarted","Data":"c90ccf86686cd276de0d6201e7afea24a7f6056f7c35f9fab5e19810fa5e7ff4"} Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.937227 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:50 crc kubenswrapper[4821]: E0930 17:05:50.938310 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:51.438293356 +0000 UTC m=+147.343339300 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:50 crc kubenswrapper[4821]: I0930 17:05:50.988354 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-hlfh2" event={"ID":"f223df8f-2c89-4ee1-b655-50e73389cb2d","Type":"ContainerStarted","Data":"14f38ee20b686f796d398fd21d5c35a9a6119000a0187e45f03d9a1ff5c24a85"} Sep 30 17:05:51 crc kubenswrapper[4821]: I0930 17:05:51.017712 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-f7fvc" podStartSLOduration=125.017696833 podStartE2EDuration="2m5.017696833s" podCreationTimestamp="2025-09-30 17:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:51.015848456 +0000 UTC m=+146.920894400" watchObservedRunningTime="2025-09-30 17:05:51.017696833 +0000 UTC m=+146.922742767" Sep 30 17:05:51 crc kubenswrapper[4821]: I0930 17:05:51.042501 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:51 crc kubenswrapper[4821]: I0930 17:05:51.068667 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-crvsh" event={"ID":"6b098eac-8578-4bea-ae1d-af41fc24e2b7","Type":"ContainerStarted","Data":"83e9396fc2f0e834bad473d4082a27bf6e5ac434e3ed96a5ea6f5c26668852c3"} Sep 30 17:05:51 crc kubenswrapper[4821]: E0930 17:05:51.068873 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:51.568853356 +0000 UTC m=+147.473899300 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:51 crc kubenswrapper[4821]: I0930 17:05:51.090498 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8gwrn" event={"ID":"dcff15ff-1913-4b35-bb98-0942fc11bdf3","Type":"ContainerStarted","Data":"62efb9cb80a9c6b9d3cc7aa91470300821c6aceaf10a05e0b2bd51024adc206d"} Sep 30 17:05:51 crc kubenswrapper[4821]: I0930 17:05:51.105779 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-qgksb" Sep 30 17:05:51 crc kubenswrapper[4821]: I0930 17:05:51.108881 4821 generic.go:334] "Generic (PLEG): container finished" podID="ccaea5c4-3efc-48ab-8159-6db6f5f77555" containerID="b7f3f85d5b91b6cc35fd437ab9b7cc4ec711dd15cb0cde6027326cddc240e280" exitCode=0 Sep 30 17:05:51 crc kubenswrapper[4821]: I0930 17:05:51.108957 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls" event={"ID":"ccaea5c4-3efc-48ab-8159-6db6f5f77555","Type":"ContainerDied","Data":"b7f3f85d5b91b6cc35fd437ab9b7cc4ec711dd15cb0cde6027326cddc240e280"} Sep 30 17:05:51 crc kubenswrapper[4821]: I0930 17:05:51.118499 4821 patch_prober.go:28] interesting pod/router-default-5444994796-qgksb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 17:05:51 crc kubenswrapper[4821]: [-]has-synced failed: reason withheld Sep 30 17:05:51 crc kubenswrapper[4821]: [+]process-running ok Sep 30 17:05:51 crc kubenswrapper[4821]: healthz check failed Sep 30 17:05:51 crc kubenswrapper[4821]: I0930 17:05:51.118611 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qgksb" podUID="140e919b-2356-4ff3-a604-76b6320ee714" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 17:05:51 crc kubenswrapper[4821]: I0930 17:05:51.137272 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8pkhf" event={"ID":"0353afa5-86b4-40c4-9633-c75046a0e84d","Type":"ContainerStarted","Data":"79769a4b6068e1d90bec8b760e02c92033923741910a5e5d0769de8f2d04cfa1"} Sep 30 17:05:51 crc kubenswrapper[4821]: I0930 17:05:51.137519 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-8pkhf" Sep 30 17:05:51 crc kubenswrapper[4821]: I0930 17:05:51.138666 4821 patch_prober.go:28] interesting pod/downloads-7954f5f757-4fxjh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Sep 30 17:05:51 crc kubenswrapper[4821]: I0930 17:05:51.138690 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4fxjh" podUID="f0fb9646-336c-4014-92ca-bb5caa55dde5" containerName="download-server" 
probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Sep 30 17:05:51 crc kubenswrapper[4821]: I0930 17:05:51.143316 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:51 crc kubenswrapper[4821]: E0930 17:05:51.143795 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:51.643779152 +0000 UTC m=+147.548825096 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:51 crc kubenswrapper[4821]: I0930 17:05:51.143938 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:51 crc kubenswrapper[4821]: E0930 17:05:51.144172 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:51.644165982 +0000 UTC m=+147.549211926 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:51 crc kubenswrapper[4821]: I0930 17:05:51.153057 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-gpsch" podStartSLOduration=125.153038332 podStartE2EDuration="2m5.153038332s" podCreationTimestamp="2025-09-30 17:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:51.068696342 +0000 UTC m=+146.973742286" watchObservedRunningTime="2025-09-30 17:05:51.153038332 +0000 UTC m=+147.058084276" Sep 30 17:05:51 crc kubenswrapper[4821]: I0930 17:05:51.154104 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-8pkhf" Sep 30 17:05:51 crc kubenswrapper[4821]: I0930 17:05:51.155919 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k5fcf" podStartSLOduration=126.155873013 podStartE2EDuration="2m6.155873013s" podCreationTimestamp="2025-09-30 17:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:51.153454763 +0000 UTC m=+147.058500707" watchObservedRunningTime="2025-09-30 17:05:51.155873013 +0000 UTC m=+147.060918957" Sep 30 17:05:51 crc kubenswrapper[4821]: I0930 17:05:51.231723 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtlgs" podStartSLOduration=125.2316947 podStartE2EDuration="2m5.2316947s" podCreationTimestamp="2025-09-30 17:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:51.227938618 +0000 UTC m=+147.132984562" watchObservedRunningTime="2025-09-30 17:05:51.2316947 +0000 UTC m=+147.136740644" Sep 30 17:05:51 crc kubenswrapper[4821]: I0930 17:05:51.251056 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:51 crc kubenswrapper[4821]: E0930 17:05:51.252485 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:51.752468928 +0000 UTC m=+147.657514882 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:51 crc kubenswrapper[4821]: I0930 17:05:51.287386 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rszth" podStartSLOduration=125.287371087 podStartE2EDuration="2m5.287371087s" podCreationTimestamp="2025-09-30 17:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:51.279761428 +0000 UTC m=+147.184807362" watchObservedRunningTime="2025-09-30 17:05:51.287371087 +0000 UTC m=+147.192417031" Sep 30 17:05:51 crc kubenswrapper[4821]: I0930 17:05:51.350442 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8gwrn" podStartSLOduration=125.350400306 podStartE2EDuration="2m5.350400306s" podCreationTimestamp="2025-09-30 17:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:51.34249608 +0000 UTC m=+147.247542024" watchObservedRunningTime="2025-09-30 17:05:51.350400306 +0000 UTC m=+147.255446250" Sep 30 17:05:51 crc kubenswrapper[4821]: I0930 17:05:51.353638 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:51 crc kubenswrapper[4821]: E0930 17:05:51.354053 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:51.854041797 +0000 UTC m=+147.759087741 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:51 crc kubenswrapper[4821]: I0930 17:05:51.390492 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-8pkhf" podStartSLOduration=125.390460474 podStartE2EDuration="2m5.390460474s" podCreationTimestamp="2025-09-30 17:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:51.38829059 +0000 UTC m=+147.293336534" watchObservedRunningTime="2025-09-30 17:05:51.390460474 +0000 UTC m=+147.295506418" Sep 30 17:05:51 crc kubenswrapper[4821]: I0930 17:05:51.454836 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:51 crc kubenswrapper[4821]: E0930 17:05:51.455030 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:51.955001041 +0000 UTC m=+147.860046985 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:51 crc kubenswrapper[4821]: I0930 17:05:51.455323 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:51 crc kubenswrapper[4821]: E0930 17:05:51.455673 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:51.955666197 +0000 UTC m=+147.860712131 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:51 crc kubenswrapper[4821]: I0930 17:05:51.562522 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:51 crc kubenswrapper[4821]: I0930 17:05:51.562626 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-crvsh" podStartSLOduration=125.56261163 podStartE2EDuration="2m5.56261163s" podCreationTimestamp="2025-09-30 17:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:51.5509682 +0000 UTC m=+147.456014144" watchObservedRunningTime="2025-09-30 17:05:51.56261163 +0000 UTC m=+147.467657574" Sep 30 17:05:51 crc kubenswrapper[4821]: E0930 17:05:51.562617 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:52.06260218 +0000 UTC m=+147.967648124 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:51 crc kubenswrapper[4821]: I0930 17:05:51.562812 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:51 crc kubenswrapper[4821]: E0930 17:05:51.563141 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:52.063133603 +0000 UTC m=+147.968179547 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:51 crc kubenswrapper[4821]: I0930 17:05:51.590078 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-qgksb" podStartSLOduration=125.590062913 podStartE2EDuration="2m5.590062913s" podCreationTimestamp="2025-09-30 17:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:51.587466068 +0000 UTC m=+147.492512002" watchObservedRunningTime="2025-09-30 17:05:51.590062913 +0000 UTC m=+147.495108857" Sep 30 17:05:51 crc kubenswrapper[4821]: I0930 17:05:51.663940 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:51 crc kubenswrapper[4821]: E0930 17:05:51.664097 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:52.164063436 +0000 UTC m=+148.069109380 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:51 crc kubenswrapper[4821]: I0930 17:05:51.664204 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:51 crc kubenswrapper[4821]: E0930 17:05:51.664452 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:52.164442805 +0000 UTC m=+148.069488749 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:51 crc kubenswrapper[4821]: I0930 17:05:51.768175 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:51 crc kubenswrapper[4821]: E0930 17:05:51.768503 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:52.268489226 +0000 UTC m=+148.173535170 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:51 crc kubenswrapper[4821]: I0930 17:05:51.869251 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:51 crc kubenswrapper[4821]: E0930 17:05:51.869615 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:52.369604294 +0000 UTC m=+148.274650238 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:51 crc kubenswrapper[4821]: I0930 17:05:51.970540 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:51 crc kubenswrapper[4821]: E0930 17:05:51.970846 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:52.470831584 +0000 UTC m=+148.375877528 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.071539 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:52 crc kubenswrapper[4821]: E0930 17:05:52.071885 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:52.571873429 +0000 UTC m=+148.476919373 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.109141 4821 patch_prober.go:28] interesting pod/router-default-5444994796-qgksb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 17:05:52 crc kubenswrapper[4821]: [-]has-synced failed: reason withheld Sep 30 17:05:52 crc kubenswrapper[4821]: [+]process-running ok Sep 30 17:05:52 crc kubenswrapper[4821]: healthz check failed Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.109200 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qgksb" podUID="140e919b-2356-4ff3-a604-76b6320ee714" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.145048 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9t59s" event={"ID":"a617aa70-de94-4903-863e-0b10a2c9253d","Type":"ContainerStarted","Data":"947c7b3854c47b761a859a4ee76a695c22c747d2de3ef4c46fbbfc05d9c69fb2"} Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.145127 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9t59s" Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.146531 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-854tr" event={"ID":"f84add95-1bc2-4534-93aa-bba177335e74","Type":"ContainerStarted","Data":"3c6dab9bbdb91a2efba542dc2f9da844a3e1c5eea152dbe8ccaec960c33d2244"} Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.146715 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-854tr" Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.147746 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t2wjt" event={"ID":"13461907-7755-4774-a0bf-379395ce0c19","Type":"ContainerStarted","Data":"074f3a3c847e8a4aa81d152e14860f41d4e16d671fcfd1c7d8d12c8bb13e5ea9"} Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.148402 4821 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-854tr container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.148442 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-854tr" podUID="f84add95-1bc2-4534-93aa-bba177335e74" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 
17:05:52.150167 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k25bw" event={"ID":"7110b719-7cb2-4f82-a854-de726147673c","Type":"ContainerStarted","Data":"defd729fc630670e47622a3525d8a44da60c52f26d4cbf98c4048a8fa884e0a5"} Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.152875 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls" event={"ID":"ccaea5c4-3efc-48ab-8159-6db6f5f77555","Type":"ContainerStarted","Data":"7912463327db6f4bb161750cf6e228f9bdcc80071ab0855f5ddac847852fc865"} Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.155053 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rmps9" event={"ID":"e116d50f-86db-4a6c-bb71-938d87196a40","Type":"ContainerStarted","Data":"d350edc015cfb738629b98d80691638ecdc4329895cef34617eccb4949f2418f"} Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.156940 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-hlfh2" event={"ID":"f223df8f-2c89-4ee1-b655-50e73389cb2d","Type":"ContainerStarted","Data":"6fc021263b0f1724a0a979267ac9381aae4fab4d88855c577c94293a3f00540c"} Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.158883 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-72jp6" event={"ID":"de931745-8fd0-4f2b-9658-6c52d53a1b4e","Type":"ContainerStarted","Data":"38518f3bc092ae5ef9349a2ad5ca477db836388bdc13dc27bc4fe7dfe384d09b"} Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.160453 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m2s62" event={"ID":"0d280380-3356-4a2a-b102-58101cfdc627","Type":"ContainerStarted","Data":"11dde3e6a957dd0de2c797dafa6bbb92821de27a976cdcbcbd9fa6e0ca16577b"} Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.161187 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m2s62" Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.162054 4821 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-m2s62 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.162100 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m2s62" podUID="0d280380-3356-4a2a-b102-58101cfdc627" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.163141 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5wqbt" event={"ID":"ead30b73-d782-4df8-b4e7-137d42ec6862","Type":"ContainerStarted","Data":"fe1a9c4fa19b282b38b8a7333267ada88e1f081d4c5e5421c4b7add966ad1ea1"} Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.163170 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5wqbt" 
event={"ID":"ead30b73-d782-4df8-b4e7-137d42ec6862","Type":"ContainerStarted","Data":"7918a1f94e679df4543b5edb30558e09390cdbefe54962cc421271cbe9b26cca"} Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.165493 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jsll6" event={"ID":"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a","Type":"ContainerStarted","Data":"6835a3f0864d7e55fe59b71eb633c200b44b938b8eda64a27a4f8eb59ab2bf6f"} Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.165530 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jsll6" event={"ID":"699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a","Type":"ContainerStarted","Data":"a6be64eaec7fc3d99d7868d7ca2de6bb5a787d80a4708347915fa685be38c0c7"} Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.167307 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rszth" event={"ID":"3b470461-fc34-4822-a4e6-cba4c761712c","Type":"ContainerStarted","Data":"9574a7484325e8f9d9e81a939786e79835f8080f14ca049a53ed7b5ea2a736ad"} Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.167718 4821 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-rszth container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.167749 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rszth" podUID="3b470461-fc34-4822-a4e6-cba4c761712c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.169279 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-4c49j" event={"ID":"d421ef87-a39c-479a-8d5e-06f9a3824e63","Type":"ContainerStarted","Data":"6a873f073945fd83e78a7fcd2fada0b756e389a84504ed794b6f0af81787420c"} Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.169305 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-4c49j" event={"ID":"d421ef87-a39c-479a-8d5e-06f9a3824e63","Type":"ContainerStarted","Data":"80b677409ef725ac6da83a13cc2cc566fe9987939e3eb36fe25cd2d10f921c52"} Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.169672 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-4c49j" Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.171333 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4qvdx" event={"ID":"2191ace4-8f37-45cd-91da-64f22f294a7f","Type":"ContainerStarted","Data":"33dc4c45c9e1c3a655b213d811f36fb78cfa06096ecee0bf9ebccf9ad78087b1"} Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.171365 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4qvdx" event={"ID":"2191ace4-8f37-45cd-91da-64f22f294a7f","Type":"ContainerStarted","Data":"411c53418fa8050d2e4e2e315ee1bdf37b160cd33029b9754ebf5cce1f8f2f64"} Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.171849 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4qvdx" Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.172191 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:52 crc kubenswrapper[4821]: E0930 17:05:52.172857 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:52.672840024 +0000 UTC m=+148.577885968 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.176526 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v2bg8" event={"ID":"5690d819-973e-4ded-a4d5-b6e7bb691c54","Type":"ContainerStarted","Data":"9fe30abebcad98fadc58ab5d2e6122e0f0b85ade0b8dd1221be22451761a4b1c"} Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.211611 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9t59s" podStartSLOduration=127.211591588 podStartE2EDuration="2m7.211591588s" podCreationTimestamp="2025-09-30 17:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:52.202280597 +0000 UTC m=+148.107326541" watchObservedRunningTime="2025-09-30 17:05:52.211591588 +0000 UTC m=+148.116637532" Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.251177 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v2bg8" podStartSLOduration=126.251160434 podStartE2EDuration="2m6.251160434s" podCreationTimestamp="2025-09-30 17:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:52.248337523 +0000 UTC m=+148.153383477" watchObservedRunningTime="2025-09-30 17:05:52.251160434 +0000 UTC m=+148.156206378" Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.262968 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtlgs" Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.273624 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:52 crc kubenswrapper[4821]: E0930 17:05:52.283906 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:52.783891759 +0000 UTC m=+148.688937703 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.292851 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m2s62" podStartSLOduration=126.292831441 podStartE2EDuration="2m6.292831441s" podCreationTimestamp="2025-09-30 17:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:52.292571504 +0000 UTC m=+148.197617448" watchObservedRunningTime="2025-09-30 17:05:52.292831441 +0000 UTC m=+148.197877385" Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.370253 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-854tr" podStartSLOduration=126.370234669 podStartE2EDuration="2m6.370234669s" podCreationTimestamp="2025-09-30 17:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:52.36268246 +0000 UTC m=+148.267728404" watchObservedRunningTime="2025-09-30 17:05:52.370234669 +0000 UTC m=+148.275280613" Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.370866 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-4c49j" podStartSLOduration=9.370860594 podStartE2EDuration="9.370860594s" podCreationTimestamp="2025-09-30 17:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:52.341940893 +0000 UTC m=+148.246986847" watchObservedRunningTime="2025-09-30 17:05:52.370860594 +0000 UTC m=+148.275906528" Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.375311 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:52 crc kubenswrapper[4821]: E0930 17:05:52.375513 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:52.875480928 +0000 UTC m=+148.780526862 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.375577 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:52 crc kubenswrapper[4821]: E0930 17:05:52.375893 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:52.875885809 +0000 UTC m=+148.780931753 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.397302 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-72jp6" podStartSLOduration=126.397287512 podStartE2EDuration="2m6.397287512s" podCreationTimestamp="2025-09-30 17:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:52.393524228 +0000 UTC m=+148.298570182" watchObservedRunningTime="2025-09-30 17:05:52.397287512 +0000 UTC m=+148.302333456" Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.415667 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5wqbt" podStartSLOduration=126.415652079 podStartE2EDuration="2m6.415652079s" podCreationTimestamp="2025-09-30 17:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:52.415077045 +0000 UTC m=+148.320122979" watchObservedRunningTime="2025-09-30 17:05:52.415652079 +0000 UTC m=+148.320698023" Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.476615 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:52 crc kubenswrapper[4821]: E0930 17:05:52.476780 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:52.976756031 +0000 UTC m=+148.881801975 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.476992 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:52 crc kubenswrapper[4821]: E0930 17:05:52.477352 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:52.977345155 +0000 UTC m=+148.882391099 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.533408 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4qvdx" podStartSLOduration=126.53339285 podStartE2EDuration="2m6.53339285s" podCreationTimestamp="2025-09-30 17:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:52.462241659 +0000 UTC m=+148.367287593" watchObservedRunningTime="2025-09-30 17:05:52.53339285 +0000 UTC m=+148.438438794" Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.564236 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls" podStartSLOduration=126.564214398 podStartE2EDuration="2m6.564214398s" podCreationTimestamp="2025-09-30 17:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:52.532723664 +0000 UTC m=+148.437769608" watchObservedRunningTime="2025-09-30 17:05:52.564214398 +0000 UTC m=+148.469260342" Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.565409 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t2wjt" podStartSLOduration=126.565401667 podStartE2EDuration="2m6.565401667s" podCreationTimestamp="2025-09-30 17:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-09-30 17:05:52.563162731 +0000 UTC m=+148.468208675" watchObservedRunningTime="2025-09-30 17:05:52.565401667 +0000 UTC m=+148.470447611" Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.577782 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.577970 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:05:52 crc kubenswrapper[4821]: E0930 17:05:52.578526 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:53.078498383 +0000 UTC m=+148.983544327 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.579573 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.679833 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.679879 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.679976 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.680010 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:05:52 crc kubenswrapper[4821]: E0930 17:05:52.680758 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:53.180739419 +0000 UTC m=+149.085785363 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.686588 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.689451 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.713080 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.740606 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.741503 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-hlfh2" podStartSLOduration=9.741486492 podStartE2EDuration="9.741486492s" podCreationTimestamp="2025-09-30 17:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:52.642588319 +0000 UTC m=+148.547634263" watchObservedRunningTime="2025-09-30 17:05:52.741486492 +0000 UTC m=+148.646532436" Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.780892 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:52 crc kubenswrapper[4821]: E0930 17:05:52.781238 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:53.281224051 +0000 UTC m=+149.186269995 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.792237 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-jsll6" podStartSLOduration=127.792219905 podStartE2EDuration="2m7.792219905s" podCreationTimestamp="2025-09-30 17:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:52.742867906 +0000 UTC m=+148.647913840" watchObservedRunningTime="2025-09-30 17:05:52.792219905 +0000 UTC m=+148.697265849" Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.822663 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.829631 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.852546 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-rmps9" podStartSLOduration=126.852530916 podStartE2EDuration="2m6.852530916s" podCreationTimestamp="2025-09-30 17:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:52.791869216 +0000 UTC m=+148.696915160" watchObservedRunningTime="2025-09-30 17:05:52.852530916 +0000 UTC m=+148.757576860" Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.882717 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:52 crc kubenswrapper[4821]: E0930 17:05:52.883007 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:53.382996755 +0000 UTC m=+149.288042689 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.984071 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:52 crc kubenswrapper[4821]: E0930 17:05:52.984235 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:53.484203975 +0000 UTC m=+149.389249929 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:52 crc kubenswrapper[4821]: I0930 17:05:52.984379 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:52 crc kubenswrapper[4821]: E0930 17:05:52.984722 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:53.484712377 +0000 UTC m=+149.389758361 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.046207 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4bqx8"] Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.047398 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4bqx8" Sep 30 17:05:53 crc kubenswrapper[4821]: W0930 17:05:53.051555 4821 reflector.go:561] object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g": failed to list *v1.Secret: secrets "certified-operators-dockercfg-4rs5g" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Sep 30 17:05:53 crc kubenswrapper[4821]: E0930 17:05:53.051619 4821 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-4rs5g\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"certified-operators-dockercfg-4rs5g\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.064701 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4bqx8"] Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.085919 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:53 crc kubenswrapper[4821]: E0930 17:05:53.086336 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:53.586040831 +0000 UTC m=+149.491086775 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.086363 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5aa5939d-5ace-49e7-a2ba-b028cf241b02-utilities\") pod \"certified-operators-4bqx8\" (UID: \"5aa5939d-5ace-49e7-a2ba-b028cf241b02\") " pod="openshift-marketplace/certified-operators-4bqx8" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.086400 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tf5r\" (UniqueName: \"kubernetes.io/projected/5aa5939d-5ace-49e7-a2ba-b028cf241b02-kube-api-access-7tf5r\") pod \"certified-operators-4bqx8\" (UID: \"5aa5939d-5ace-49e7-a2ba-b028cf241b02\") " pod="openshift-marketplace/certified-operators-4bqx8" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.086452 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.086492 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5aa5939d-5ace-49e7-a2ba-b028cf241b02-catalog-content\") pod \"certified-operators-4bqx8\" (UID: \"5aa5939d-5ace-49e7-a2ba-b028cf241b02\") " pod="openshift-marketplace/certified-operators-4bqx8" Sep 30 17:05:53 crc kubenswrapper[4821]: E0930 17:05:53.086834 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:53.586811819 +0000 UTC m=+149.491857763 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.104811 4821 patch_prober.go:28] interesting pod/router-default-5444994796-qgksb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 17:05:53 crc kubenswrapper[4821]: [-]has-synced failed: reason withheld Sep 30 17:05:53 crc kubenswrapper[4821]: [+]process-running ok Sep 30 17:05:53 crc kubenswrapper[4821]: healthz check failed Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.104903 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qgksb" podUID="140e919b-2356-4ff3-a604-76b6320ee714" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.187974 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:53 crc kubenswrapper[4821]: E0930 17:05:53.188148 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:53.688123752 +0000 UTC m=+149.593169696 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.188182 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5aa5939d-5ace-49e7-a2ba-b028cf241b02-catalog-content\") pod \"certified-operators-4bqx8\" (UID: \"5aa5939d-5ace-49e7-a2ba-b028cf241b02\") " pod="openshift-marketplace/certified-operators-4bqx8" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.188265 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5aa5939d-5ace-49e7-a2ba-b028cf241b02-utilities\") pod \"certified-operators-4bqx8\" (UID: \"5aa5939d-5ace-49e7-a2ba-b028cf241b02\") " pod="openshift-marketplace/certified-operators-4bqx8" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.188291 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tf5r\" (UniqueName: \"kubernetes.io/projected/5aa5939d-5ace-49e7-a2ba-b028cf241b02-kube-api-access-7tf5r\") pod \"certified-operators-4bqx8\" (UID: \"5aa5939d-5ace-49e7-a2ba-b028cf241b02\") " pod="openshift-marketplace/certified-operators-4bqx8" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.188327 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:53 crc kubenswrapper[4821]: E0930 17:05:53.188589 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:53.688577793 +0000 UTC m=+149.593623737 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.188828 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5aa5939d-5ace-49e7-a2ba-b028cf241b02-catalog-content\") pod \"certified-operators-4bqx8\" (UID: \"5aa5939d-5ace-49e7-a2ba-b028cf241b02\") " pod="openshift-marketplace/certified-operators-4bqx8" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.188954 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5aa5939d-5ace-49e7-a2ba-b028cf241b02-utilities\") pod \"certified-operators-4bqx8\" (UID: \"5aa5939d-5ace-49e7-a2ba-b028cf241b02\") " pod="openshift-marketplace/certified-operators-4bqx8" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.194643 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k25bw" event={"ID":"7110b719-7cb2-4f82-a854-de726147673c","Type":"ContainerStarted","Data":"457c2fde60b1cb3ec73d7a8780638fea864b8081442f139f77767c1a3ba4d897"} Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.195697 4821 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-854tr container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.195734 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-854tr" podUID="f84add95-1bc2-4534-93aa-bba177335e74" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.292059 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jn2vp"] Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.297688 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jn2vp" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.300052 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tf5r\" (UniqueName: \"kubernetes.io/projected/5aa5939d-5ace-49e7-a2ba-b028cf241b02-kube-api-access-7tf5r\") pod \"certified-operators-4bqx8\" (UID: \"5aa5939d-5ace-49e7-a2ba-b028cf241b02\") " pod="openshift-marketplace/certified-operators-4bqx8" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.300916 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:53 crc kubenswrapper[4821]: E0930 17:05:53.309040 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:53.809009722 +0000 UTC m=+149.714055666 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.309313 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:53 crc kubenswrapper[4821]: E0930 17:05:53.316151 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:53.816131649 +0000 UTC m=+149.721177593 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.321299 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m2s62" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.337462 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.337843 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jn2vp"] Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.417224 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:53 crc kubenswrapper[4821]: E0930 17:05:53.418554 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:53.918539039 +0000 UTC m=+149.823584983 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.456046 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dpfbb"] Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.468230 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dpfbb" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.521516 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7700fbde-8552-4aa1-b6e9-910bf3a45207-utilities\") pod \"community-operators-jn2vp\" (UID: \"7700fbde-8552-4aa1-b6e9-910bf3a45207\") " pod="openshift-marketplace/community-operators-jn2vp" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.521544 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5sbn\" (UniqueName: \"kubernetes.io/projected/7700fbde-8552-4aa1-b6e9-910bf3a45207-kube-api-access-v5sbn\") pod \"community-operators-jn2vp\" (UID: \"7700fbde-8552-4aa1-b6e9-910bf3a45207\") " pod="openshift-marketplace/community-operators-jn2vp" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.521567 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7700fbde-8552-4aa1-b6e9-910bf3a45207-catalog-content\") pod \"community-operators-jn2vp\" (UID: \"7700fbde-8552-4aa1-b6e9-910bf3a45207\") " pod="openshift-marketplace/community-operators-jn2vp" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.521625 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:53 crc kubenswrapper[4821]: E0930 17:05:53.521903 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:54.021890662 +0000 UTC m=+149.926936606 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.546014 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dpfbb"] Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.625542 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.625765 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63e05203-88cc-4a5f-8b18-eb990b5a6ca0-utilities\") pod \"certified-operators-dpfbb\" (UID: \"63e05203-88cc-4a5f-8b18-eb990b5a6ca0\") " pod="openshift-marketplace/certified-operators-dpfbb" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.625788 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63e05203-88cc-4a5f-8b18-eb990b5a6ca0-catalog-content\") pod \"certified-operators-dpfbb\" (UID: \"63e05203-88cc-4a5f-8b18-eb990b5a6ca0\") " pod="openshift-marketplace/certified-operators-dpfbb" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.625804 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fnsv\" (UniqueName: \"kubernetes.io/projected/63e05203-88cc-4a5f-8b18-eb990b5a6ca0-kube-api-access-9fnsv\") pod \"certified-operators-dpfbb\" (UID: \"63e05203-88cc-4a5f-8b18-eb990b5a6ca0\") " pod="openshift-marketplace/certified-operators-dpfbb" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.625840 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7700fbde-8552-4aa1-b6e9-910bf3a45207-utilities\") pod \"community-operators-jn2vp\" (UID: \"7700fbde-8552-4aa1-b6e9-910bf3a45207\") " pod="openshift-marketplace/community-operators-jn2vp" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.625855 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5sbn\" (UniqueName: \"kubernetes.io/projected/7700fbde-8552-4aa1-b6e9-910bf3a45207-kube-api-access-v5sbn\") pod \"community-operators-jn2vp\" (UID: \"7700fbde-8552-4aa1-b6e9-910bf3a45207\") " pod="openshift-marketplace/community-operators-jn2vp" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.625877 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7700fbde-8552-4aa1-b6e9-910bf3a45207-catalog-content\") pod \"community-operators-jn2vp\" (UID: \"7700fbde-8552-4aa1-b6e9-910bf3a45207\") " pod="openshift-marketplace/community-operators-jn2vp" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.626332 4821 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7700fbde-8552-4aa1-b6e9-910bf3a45207-catalog-content\") pod \"community-operators-jn2vp\" (UID: \"7700fbde-8552-4aa1-b6e9-910bf3a45207\") " pod="openshift-marketplace/community-operators-jn2vp" Sep 30 17:05:53 crc kubenswrapper[4821]: E0930 17:05:53.626402 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:54.126387494 +0000 UTC m=+150.031433438 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.626606 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7700fbde-8552-4aa1-b6e9-910bf3a45207-utilities\") pod \"community-operators-jn2vp\" (UID: \"7700fbde-8552-4aa1-b6e9-910bf3a45207\") " pod="openshift-marketplace/community-operators-jn2vp" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.708480 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ld6kq"] Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.709414 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ld6kq" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.726735 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.726776 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63e05203-88cc-4a5f-8b18-eb990b5a6ca0-utilities\") pod \"certified-operators-dpfbb\" (UID: \"63e05203-88cc-4a5f-8b18-eb990b5a6ca0\") " pod="openshift-marketplace/certified-operators-dpfbb" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.726798 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63e05203-88cc-4a5f-8b18-eb990b5a6ca0-catalog-content\") pod \"certified-operators-dpfbb\" (UID: \"63e05203-88cc-4a5f-8b18-eb990b5a6ca0\") " pod="openshift-marketplace/certified-operators-dpfbb" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.726812 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fnsv\" (UniqueName: \"kubernetes.io/projected/63e05203-88cc-4a5f-8b18-eb990b5a6ca0-kube-api-access-9fnsv\") pod \"certified-operators-dpfbb\" (UID: \"63e05203-88cc-4a5f-8b18-eb990b5a6ca0\") " pod="openshift-marketplace/certified-operators-dpfbb" Sep 30 17:05:53 crc kubenswrapper[4821]: E0930 17:05:53.727314 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:54.227303157 +0000 UTC m=+150.132349101 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.727742 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63e05203-88cc-4a5f-8b18-eb990b5a6ca0-utilities\") pod \"certified-operators-dpfbb\" (UID: \"63e05203-88cc-4a5f-8b18-eb990b5a6ca0\") " pod="openshift-marketplace/certified-operators-dpfbb" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.728152 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63e05203-88cc-4a5f-8b18-eb990b5a6ca0-catalog-content\") pod \"certified-operators-dpfbb\" (UID: \"63e05203-88cc-4a5f-8b18-eb990b5a6ca0\") " pod="openshift-marketplace/certified-operators-dpfbb" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.742876 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5sbn\" (UniqueName: \"kubernetes.io/projected/7700fbde-8552-4aa1-b6e9-910bf3a45207-kube-api-access-v5sbn\") pod \"community-operators-jn2vp\" (UID: \"7700fbde-8552-4aa1-b6e9-910bf3a45207\") " pod="openshift-marketplace/community-operators-jn2vp" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.834284 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.834777 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dzms\" (UniqueName: \"kubernetes.io/projected/874864e1-57fd-4885-9714-cf9e3365b5c0-kube-api-access-9dzms\") pod \"community-operators-ld6kq\" (UID: \"874864e1-57fd-4885-9714-cf9e3365b5c0\") " pod="openshift-marketplace/community-operators-ld6kq" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.834829 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/874864e1-57fd-4885-9714-cf9e3365b5c0-catalog-content\") pod \"community-operators-ld6kq\" (UID: \"874864e1-57fd-4885-9714-cf9e3365b5c0\") " pod="openshift-marketplace/community-operators-ld6kq" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.834844 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/874864e1-57fd-4885-9714-cf9e3365b5c0-utilities\") pod \"community-operators-ld6kq\" (UID: \"874864e1-57fd-4885-9714-cf9e3365b5c0\") " pod="openshift-marketplace/community-operators-ld6kq" Sep 30 17:05:53 crc kubenswrapper[4821]: E0930 17:05:53.834967 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-09-30 17:05:54.334951447 +0000 UTC m=+150.239997391 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.861023 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ld6kq"] Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.887144 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fnsv\" (UniqueName: \"kubernetes.io/projected/63e05203-88cc-4a5f-8b18-eb990b5a6ca0-kube-api-access-9fnsv\") pod \"certified-operators-dpfbb\" (UID: \"63e05203-88cc-4a5f-8b18-eb990b5a6ca0\") " pod="openshift-marketplace/certified-operators-dpfbb" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.927401 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.931208 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4bqx8" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.936738 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.936794 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dzms\" (UniqueName: \"kubernetes.io/projected/874864e1-57fd-4885-9714-cf9e3365b5c0-kube-api-access-9dzms\") pod \"community-operators-ld6kq\" (UID: \"874864e1-57fd-4885-9714-cf9e3365b5c0\") " pod="openshift-marketplace/community-operators-ld6kq" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.936845 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/874864e1-57fd-4885-9714-cf9e3365b5c0-catalog-content\") pod \"community-operators-ld6kq\" (UID: \"874864e1-57fd-4885-9714-cf9e3365b5c0\") " pod="openshift-marketplace/community-operators-ld6kq" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.936862 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/874864e1-57fd-4885-9714-cf9e3365b5c0-utilities\") pod \"community-operators-ld6kq\" (UID: \"874864e1-57fd-4885-9714-cf9e3365b5c0\") " pod="openshift-marketplace/community-operators-ld6kq" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.937274 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/874864e1-57fd-4885-9714-cf9e3365b5c0-utilities\") pod \"community-operators-ld6kq\" (UID: \"874864e1-57fd-4885-9714-cf9e3365b5c0\") " pod="openshift-marketplace/community-operators-ld6kq" Sep 30 
17:05:53 crc kubenswrapper[4821]: E0930 17:05:53.937537 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:54.437522951 +0000 UTC m=+150.342568895 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.938591 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/874864e1-57fd-4885-9714-cf9e3365b5c0-catalog-content\") pod \"community-operators-ld6kq\" (UID: \"874864e1-57fd-4885-9714-cf9e3365b5c0\") " pod="openshift-marketplace/community-operators-ld6kq" Sep 30 17:05:53 crc kubenswrapper[4821]: I0930 17:05:53.979350 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jn2vp" Sep 30 17:05:54 crc kubenswrapper[4821]: I0930 17:05:54.038579 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:54 crc kubenswrapper[4821]: E0930 17:05:54.038988 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:54.538967686 +0000 UTC m=+150.444013630 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:54 crc kubenswrapper[4821]: I0930 17:05:54.068933 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dzms\" (UniqueName: \"kubernetes.io/projected/874864e1-57fd-4885-9714-cf9e3365b5c0-kube-api-access-9dzms\") pod \"community-operators-ld6kq\" (UID: \"874864e1-57fd-4885-9714-cf9e3365b5c0\") " pod="openshift-marketplace/community-operators-ld6kq" Sep 30 17:05:54 crc kubenswrapper[4821]: I0930 17:05:54.104024 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dpfbb" Sep 30 17:05:54 crc kubenswrapper[4821]: I0930 17:05:54.112076 4821 patch_prober.go:28] interesting pod/router-default-5444994796-qgksb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 17:05:54 crc kubenswrapper[4821]: [-]has-synced failed: reason withheld Sep 30 17:05:54 crc kubenswrapper[4821]: [+]process-running ok Sep 30 17:05:54 crc kubenswrapper[4821]: healthz check failed Sep 30 17:05:54 crc kubenswrapper[4821]: I0930 17:05:54.112155 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qgksb" podUID="140e919b-2356-4ff3-a604-76b6320ee714" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 17:05:54 crc kubenswrapper[4821]: I0930 17:05:54.140746 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:54 crc kubenswrapper[4821]: E0930 17:05:54.141221 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:54.641206931 +0000 UTC m=+150.546252875 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:54 crc kubenswrapper[4821]: I0930 17:05:54.194967 4821 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-rszth container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Sep 30 17:05:54 crc kubenswrapper[4821]: I0930 17:05:54.195019 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rszth" podUID="3b470461-fc34-4822-a4e6-cba4c761712c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Sep 30 17:05:54 crc kubenswrapper[4821]: I0930 17:05:54.229809 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"817d2bf67600219dadd8bd41a08f9ba1a66cec03736e13f1237e51f63b53995b"} Sep 30 17:05:54 crc kubenswrapper[4821]: I0930 17:05:54.242542 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:54 crc kubenswrapper[4821]: E0930 17:05:54.242876 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:54.742859993 +0000 UTC m=+150.647905937 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:54 crc kubenswrapper[4821]: I0930 17:05:54.328046 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ld6kq" Sep 30 17:05:54 crc kubenswrapper[4821]: I0930 17:05:54.344737 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:54 crc kubenswrapper[4821]: E0930 17:05:54.346044 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:54.846029892 +0000 UTC m=+150.751075836 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:54 crc kubenswrapper[4821]: I0930 17:05:54.445757 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:54 crc kubenswrapper[4821]: E0930 17:05:54.446039 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:54.946024841 +0000 UTC m=+150.851070785 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:05:54 crc kubenswrapper[4821]: I0930 17:05:54.551285 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm"
Sep 30 17:05:54 crc kubenswrapper[4821]: E0930 17:05:54.551662 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:55.051643901 +0000 UTC m=+150.956689845 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:05:54 crc kubenswrapper[4821]: I0930 17:05:54.654238 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 30 17:05:54 crc kubenswrapper[4821]: E0930 17:05:54.654377 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:55.154352538 +0000 UTC m=+151.059398482 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:05:54 crc kubenswrapper[4821]: I0930 17:05:54.654503 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm"
Sep 30 17:05:54 crc kubenswrapper[4821]: E0930 17:05:54.654794 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:55.154782179 +0000 UTC m=+151.059828123 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:05:54 crc kubenswrapper[4821]: I0930 17:05:54.757896 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 30 17:05:54 crc kubenswrapper[4821]: E0930 17:05:54.773272 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:55.273248029 +0000 UTC m=+151.178293973 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:05:54 crc kubenswrapper[4821]: I0930 17:05:54.874813 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm"
Sep 30 17:05:54 crc kubenswrapper[4821]: E0930 17:05:54.875210 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:55.375198567 +0000 UTC m=+151.280244511 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:05:54 crc kubenswrapper[4821]: I0930 17:05:54.977428 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 30 17:05:54 crc kubenswrapper[4821]: E0930 17:05:54.977774 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:55.47775952 +0000 UTC m=+151.382805464 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.079167 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm"
Sep 30 17:05:55 crc kubenswrapper[4821]: E0930 17:05:55.080945 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:55.580928999 +0000 UTC m=+151.485974943 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.107907 4821 patch_prober.go:28] interesting pod/router-default-5444994796-qgksb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Sep 30 17:05:55 crc kubenswrapper[4821]: [-]has-synced failed: reason withheld
Sep 30 17:05:55 crc kubenswrapper[4821]: [+]process-running ok
Sep 30 17:05:55 crc kubenswrapper[4821]: healthz check failed
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.109361 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qgksb" podUID="140e919b-2356-4ff3-a604-76b6320ee714" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.173472 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.174151 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.177833 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.178003 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.187238 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 30 17:05:55 crc kubenswrapper[4821]: E0930 17:05:55.187518 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:55.687503973 +0000 UTC m=+151.592549917 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.217770 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.289850 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ab0936e-6bd0-4587-b7e1-76575578eec9-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5ab0936e-6bd0-4587-b7e1-76575578eec9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.289901 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ab0936e-6bd0-4587-b7e1-76575578eec9-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5ab0936e-6bd0-4587-b7e1-76575578eec9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.290107 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm"
Sep 30 17:05:55 crc kubenswrapper[4821]: E0930 17:05:55.290504 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:55.790488786 +0000 UTC m=+151.695534730 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.304350 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b2c4caa2ceca0cd4cbcbe46410b5791f1f631b262fa6ed2ab661f7860dcfe542"}
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.304396 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e04f03cb474382930fdd38286e1be2bc781179653321333e5bd2bc950906a6e5"}
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.305292 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.331198 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"89e6be1acc99cd6d4ac8c93dafed130c47c96c7febf7a68613b9ff64817d3874"}
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.336281 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k25bw" event={"ID":"7110b719-7cb2-4f82-a854-de726147673c","Type":"ContainerStarted","Data":"7067003142a2062bc8f5a2a3ac15b9a4b876d0e939fb023ca116cc4fee686cab"}
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.346895 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f3420a63f724364ae6c66b01cf3776cb013147f941dcfe2056aba5043030831b"}
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.346933 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"736151df468983ae0718db25e510aeb838f5557c8c9f1bdf174d501064b56a5e"}
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.393443 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.393613 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ab0936e-6bd0-4587-b7e1-76575578eec9-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5ab0936e-6bd0-4587-b7e1-76575578eec9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.393650 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ab0936e-6bd0-4587-b7e1-76575578eec9-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5ab0936e-6bd0-4587-b7e1-76575578eec9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.394240 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ab0936e-6bd0-4587-b7e1-76575578eec9-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5ab0936e-6bd0-4587-b7e1-76575578eec9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Sep 30 17:05:55 crc kubenswrapper[4821]: E0930 17:05:55.394255 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:55.894231069 +0000 UTC m=+151.799277013 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.444408 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4bqx8"]
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.468620 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ab0936e-6bd0-4587-b7e1-76575578eec9-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5ab0936e-6bd0-4587-b7e1-76575578eec9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.471364 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fvjkc"]
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.472418 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fvjkc"
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.489981 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.494992 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm"
Sep 30 17:05:55 crc kubenswrapper[4821]: E0930 17:05:55.497165 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:55.997153052 +0000 UTC m=+151.902198996 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.507394 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-lzvgr"
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.508133 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-lzvgr"
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.516683 4821 patch_prober.go:28] interesting pod/console-f9d7485db-lzvgr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.516734 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-lzvgr" podUID="08d6cb47-472a-4bda-bfc0-738029e84e40" containerName="console" probeResult="failure" output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused"
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.520171 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fvjkc"]
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.557763 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jn2vp"]
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.581283 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.599199 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 30 17:05:55 crc kubenswrapper[4821]: E0930 17:05:55.599645 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:56.099620293 +0000 UTC m=+152.004666227 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.603453 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm"
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.603569 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zsmc\" (UniqueName: \"kubernetes.io/projected/c07e1e4d-c8fa-48d6-a138-3c42ccf2e368-kube-api-access-7zsmc\") pod \"redhat-marketplace-fvjkc\" (UID: \"c07e1e4d-c8fa-48d6-a138-3c42ccf2e368\") " pod="openshift-marketplace/redhat-marketplace-fvjkc"
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.603602 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c07e1e4d-c8fa-48d6-a138-3c42ccf2e368-catalog-content\") pod \"redhat-marketplace-fvjkc\" (UID: \"c07e1e4d-c8fa-48d6-a138-3c42ccf2e368\") " pod="openshift-marketplace/redhat-marketplace-fvjkc"
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.603712 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c07e1e4d-c8fa-48d6-a138-3c42ccf2e368-utilities\") pod \"redhat-marketplace-fvjkc\" (UID: \"c07e1e4d-c8fa-48d6-a138-3c42ccf2e368\") " pod="openshift-marketplace/redhat-marketplace-fvjkc"
Sep 30 17:05:55 crc kubenswrapper[4821]: E0930 17:05:55.604118 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:56.104101495 +0000 UTC m=+152.009147439 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.704911 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.705651 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zsmc\" (UniqueName: \"kubernetes.io/projected/c07e1e4d-c8fa-48d6-a138-3c42ccf2e368-kube-api-access-7zsmc\") pod \"redhat-marketplace-fvjkc\" (UID: \"c07e1e4d-c8fa-48d6-a138-3c42ccf2e368\") " pod="openshift-marketplace/redhat-marketplace-fvjkc"
Sep 30 17:05:55 crc kubenswrapper[4821]: E0930 17:05:55.705683 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:56.205652544 +0000 UTC m=+152.110698488 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.705936 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c07e1e4d-c8fa-48d6-a138-3c42ccf2e368-catalog-content\") pod \"redhat-marketplace-fvjkc\" (UID: \"c07e1e4d-c8fa-48d6-a138-3c42ccf2e368\") " pod="openshift-marketplace/redhat-marketplace-fvjkc"
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.706056 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c07e1e4d-c8fa-48d6-a138-3c42ccf2e368-utilities\") pod \"redhat-marketplace-fvjkc\" (UID: \"c07e1e4d-c8fa-48d6-a138-3c42ccf2e368\") " pod="openshift-marketplace/redhat-marketplace-fvjkc"
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.725177 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c07e1e4d-c8fa-48d6-a138-3c42ccf2e368-utilities\") pod \"redhat-marketplace-fvjkc\" (UID: \"c07e1e4d-c8fa-48d6-a138-3c42ccf2e368\") " pod="openshift-marketplace/redhat-marketplace-fvjkc"
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.730596 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c07e1e4d-c8fa-48d6-a138-3c42ccf2e368-catalog-content\") pod \"redhat-marketplace-fvjkc\" (UID: \"c07e1e4d-c8fa-48d6-a138-3c42ccf2e368\") " pod="openshift-marketplace/redhat-marketplace-fvjkc"
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.791361 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zsmc\" (UniqueName: \"kubernetes.io/projected/c07e1e4d-c8fa-48d6-a138-3c42ccf2e368-kube-api-access-7zsmc\") pod \"redhat-marketplace-fvjkc\" (UID: \"c07e1e4d-c8fa-48d6-a138-3c42ccf2e368\") " pod="openshift-marketplace/redhat-marketplace-fvjkc"
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.808317 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm"
Sep 30 17:05:55 crc kubenswrapper[4821]: E0930 17:05:55.808605 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:56.308592586 +0000 UTC m=+152.213638530 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.850874 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dpfbb"]
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.897516 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fvjkc"
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.899308 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6rst9"]
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.900519 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6rst9"
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.914344 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 30 17:05:55 crc kubenswrapper[4821]: E0930 17:05:55.914729 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:56.414711988 +0000 UTC m=+152.319757932 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.934859 4821 patch_prober.go:28] interesting pod/downloads-7954f5f757-4fxjh container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body=
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.934923 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-4fxjh" podUID="f0fb9646-336c-4014-92ca-bb5caa55dde5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused"
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.939628 4821 patch_prober.go:28] interesting pod/downloads-7954f5f757-4fxjh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body=
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.939759 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4fxjh" podUID="f0fb9646-336c-4014-92ca-bb5caa55dde5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused"
Sep 30 17:05:55 crc kubenswrapper[4821]: I0930 17:05:55.977194 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6rst9"]
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.019877 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skshp\" (UniqueName: \"kubernetes.io/projected/d457d791-8e73-44b2-9292-33188e42571f-kube-api-access-skshp\") pod \"redhat-marketplace-6rst9\" (UID: \"d457d791-8e73-44b2-9292-33188e42571f\") " pod="openshift-marketplace/redhat-marketplace-6rst9"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.019968 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d457d791-8e73-44b2-9292-33188e42571f-utilities\") pod \"redhat-marketplace-6rst9\" (UID: \"d457d791-8e73-44b2-9292-33188e42571f\") " pod="openshift-marketplace/redhat-marketplace-6rst9"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.019995 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d457d791-8e73-44b2-9292-33188e42571f-catalog-content\") pod \"redhat-marketplace-6rst9\" (UID: \"d457d791-8e73-44b2-9292-33188e42571f\") " pod="openshift-marketplace/redhat-marketplace-6rst9"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.020059 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm"
Sep 30 17:05:56 crc kubenswrapper[4821]: E0930 17:05:56.020338 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:56.520327049 +0000 UTC m=+152.425372983 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.027717 4821 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.067220 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ld6kq"]
Sep 30 17:05:56 crc kubenswrapper[4821]: W0930 17:05:56.087849 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod874864e1_57fd_4885_9714_cf9e3365b5c0.slice/crio-3fa56ec4f47839ab5a85f2da6277ac6678175039a555dc081dbb52b2bcd129e0 WatchSource:0}: Error finding container 3fa56ec4f47839ab5a85f2da6277ac6678175039a555dc081dbb52b2bcd129e0: Status 404 returned error can't find the container with id 3fa56ec4f47839ab5a85f2da6277ac6678175039a555dc081dbb52b2bcd129e0
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.111322 4821 patch_prober.go:28] interesting pod/router-default-5444994796-qgksb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Sep 30 17:05:56 crc kubenswrapper[4821]: [-]has-synced failed: reason withheld
Sep 30 17:05:56 crc kubenswrapper[4821]: [+]process-running ok
Sep 30 17:05:56 crc kubenswrapper[4821]: healthz check failed
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.111394 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qgksb" podUID="140e919b-2356-4ff3-a604-76b6320ee714" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.123030 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.123253 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d457d791-8e73-44b2-9292-33188e42571f-utilities\") pod \"redhat-marketplace-6rst9\" (UID: \"d457d791-8e73-44b2-9292-33188e42571f\") " pod="openshift-marketplace/redhat-marketplace-6rst9"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.123279 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d457d791-8e73-44b2-9292-33188e42571f-catalog-content\") pod \"redhat-marketplace-6rst9\" (UID: \"d457d791-8e73-44b2-9292-33188e42571f\") " pod="openshift-marketplace/redhat-marketplace-6rst9"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.123336 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skshp\" (UniqueName: \"kubernetes.io/projected/d457d791-8e73-44b2-9292-33188e42571f-kube-api-access-skshp\") pod \"redhat-marketplace-6rst9\" (UID: \"d457d791-8e73-44b2-9292-33188e42571f\") " pod="openshift-marketplace/redhat-marketplace-6rst9"
Sep 30 17:05:56 crc kubenswrapper[4821]: E0930 17:05:56.123675 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:56.62365865 +0000 UTC m=+152.528704604 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.124002 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d457d791-8e73-44b2-9292-33188e42571f-utilities\") pod \"redhat-marketplace-6rst9\" (UID: \"d457d791-8e73-44b2-9292-33188e42571f\") " pod="openshift-marketplace/redhat-marketplace-6rst9"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.124265 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d457d791-8e73-44b2-9292-33188e42571f-catalog-content\") pod \"redhat-marketplace-6rst9\" (UID: \"d457d791-8e73-44b2-9292-33188e42571f\") " pod="openshift-marketplace/redhat-marketplace-6rst9"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.158258 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skshp\" (UniqueName: \"kubernetes.io/projected/d457d791-8e73-44b2-9292-33188e42571f-kube-api-access-skshp\") pod \"redhat-marketplace-6rst9\" (UID: \"d457d791-8e73-44b2-9292-33188e42571f\") " pod="openshift-marketplace/redhat-marketplace-6rst9"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.218531 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.227558 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm"
Sep 30 17:05:56 crc kubenswrapper[4821]: E0930 17:05:56.227992 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:56.727977398 +0000 UTC m=+152.633023512 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.252442 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6rst9"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.328511 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 30 17:05:56 crc kubenswrapper[4821]: E0930 17:05:56.329381 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:56.829361472 +0000 UTC m=+152.734407406 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.351537 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dn2dm"]
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.352745 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dn2dm"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.361766 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ld6kq" event={"ID":"874864e1-57fd-4885-9714-cf9e3365b5c0","Type":"ContainerStarted","Data":"3fa56ec4f47839ab5a85f2da6277ac6678175039a555dc081dbb52b2bcd129e0"}
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.372102 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.372836 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k25bw" event={"ID":"7110b719-7cb2-4f82-a854-de726147673c","Type":"ContainerStarted","Data":"fc781b046709472fb96d12726cb0be1c9207586011466b91a18e35c33cf13608"}
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.378225 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpfbb" event={"ID":"63e05203-88cc-4a5f-8b18-eb990b5a6ca0","Type":"ContainerStarted","Data":"d955432d1b32ced576740354ad3efb2869f542a45c1b43cab9375099cdd68eca"}
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.399422 4821 generic.go:334] "Generic (PLEG): container finished" podID="7700fbde-8552-4aa1-b6e9-910bf3a45207" containerID="afb832c8e385a199de9234050582200f5d2b6ced531f3abfba198e92c9ab6b30" exitCode=0
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.399493 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jn2vp" event={"ID":"7700fbde-8552-4aa1-b6e9-910bf3a45207","Type":"ContainerDied","Data":"afb832c8e385a199de9234050582200f5d2b6ced531f3abfba198e92c9ab6b30"}
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.399517 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jn2vp" event={"ID":"7700fbde-8552-4aa1-b6e9-910bf3a45207","Type":"ContainerStarted","Data":"3373a4a5b492b5df76b5d7dd0a26b89b357e0c5c1dc012f35b82f745363b7f5a"}
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.404679 4821 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.419204 4821 generic.go:334] "Generic (PLEG): container finished" podID="5aa5939d-5ace-49e7-a2ba-b028cf241b02" containerID="7e86de19397572a2a4a20fb54f11e2b1d0a44fb62f34a13a558275d1ae811d62" exitCode=0
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.419543 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bqx8" event={"ID":"5aa5939d-5ace-49e7-a2ba-b028cf241b02","Type":"ContainerDied","Data":"7e86de19397572a2a4a20fb54f11e2b1d0a44fb62f34a13a558275d1ae811d62"}
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.419619 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bqx8" event={"ID":"5aa5939d-5ace-49e7-a2ba-b028cf241b02","Type":"ContainerStarted","Data":"bcfa889f70f001330c0ef8e77cd0b3bfa551be79e028ea5f232a78f9fe8e8b62"}
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.423328 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dn2dm"]
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.432389 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm"
Sep 30 17:05:56 crc kubenswrapper[4821]: E0930 17:05:56.432716 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:56.932695925 +0000 UTC m=+152.837741869 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.453582 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.453615 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.472540 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.527976 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7dvln"]
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.536051 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.536437 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n7pt\" (UniqueName: \"kubernetes.io/projected/7d7a822d-04df-4ce0-bc18-15bc2195f18e-kube-api-access-6n7pt\") pod \"redhat-operators-dn2dm\" (UID: \"7d7a822d-04df-4ce0-bc18-15bc2195f18e\") " pod="openshift-marketplace/redhat-operators-dn2dm"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.536514 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d7a822d-04df-4ce0-bc18-15bc2195f18e-utilities\") pod \"redhat-operators-dn2dm\" (UID: \"7d7a822d-04df-4ce0-bc18-15bc2195f18e\") " pod="openshift-marketplace/redhat-operators-dn2dm"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.536560 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d7a822d-04df-4ce0-bc18-15bc2195f18e-catalog-content\") pod \"redhat-operators-dn2dm\" (UID: \"7d7a822d-04df-4ce0-bc18-15bc2195f18e\") " pod="openshift-marketplace/redhat-operators-dn2dm"
Sep 30 17:05:56 crc kubenswrapper[4821]: E0930 17:05:56.536660 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:57.036643893 +0000 UTC m=+152.941689837 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.540143 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7dvln"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.549498 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-k25bw" podStartSLOduration=13.549475952 podStartE2EDuration="13.549475952s" podCreationTimestamp="2025-09-30 17:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:56.526234794 +0000 UTC m=+152.431280738" watchObservedRunningTime="2025-09-30 17:05:56.549475952 +0000 UTC m=+152.454521896"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.566390 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7dvln"]
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.585684 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Sep 30 17:05:56 crc kubenswrapper[4821]: W0930 17:05:56.602242 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5ab0936e_6bd0_4587_b7e1_76575578eec9.slice/crio-76d95926858a87981a246ed84fec1de08e6f2e8195dee8a7c221706f0b2ce886 WatchSource:0}: Error finding container 76d95926858a87981a246ed84fec1de08e6f2e8195dee8a7c221706f0b2ce886: Status 404 returned error can't find the container with id 76d95926858a87981a246ed84fec1de08e6f2e8195dee8a7c221706f0b2ce886
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.634340 4821 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9t59s container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.634444 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9t59s" podUID="a617aa70-de94-4903-863e-0b10a2c9253d" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.641995 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.642562 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a04dbc2-4ac0-4926-96a5-aa5d11cf3380-utilities\") pod \"redhat-operators-7dvln\" (UID: \"4a04dbc2-4ac0-4926-96a5-aa5d11cf3380\") " pod="openshift-marketplace/redhat-operators-7dvln"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.642705 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a04dbc2-4ac0-4926-96a5-aa5d11cf3380-catalog-content\") pod \"redhat-operators-7dvln\" (UID: \"4a04dbc2-4ac0-4926-96a5-aa5d11cf3380\") " pod="openshift-marketplace/redhat-operators-7dvln"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.642817 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n7pt\" (UniqueName: \"kubernetes.io/projected/7d7a822d-04df-4ce0-bc18-15bc2195f18e-kube-api-access-6n7pt\") pod \"redhat-operators-dn2dm\" (UID: \"7d7a822d-04df-4ce0-bc18-15bc2195f18e\") " pod="openshift-marketplace/redhat-operators-dn2dm"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.642981 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d7a822d-04df-4ce0-bc18-15bc2195f18e-utilities\") pod \"redhat-operators-dn2dm\" (UID: \"7d7a822d-04df-4ce0-bc18-15bc2195f18e\") " pod="openshift-marketplace/redhat-operators-dn2dm"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.643123 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d7a822d-04df-4ce0-bc18-15bc2195f18e-catalog-content\") pod \"redhat-operators-dn2dm\" (UID: \"7d7a822d-04df-4ce0-bc18-15bc2195f18e\") " pod="openshift-marketplace/redhat-operators-dn2dm"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.643299 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9lb4\" (UniqueName: \"kubernetes.io/projected/4a04dbc2-4ac0-4926-96a5-aa5d11cf3380-kube-api-access-r9lb4\") pod \"redhat-operators-7dvln\" (UID: \"4a04dbc2-4ac0-4926-96a5-aa5d11cf3380\") " pod="openshift-marketplace/redhat-operators-7dvln"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.643750 4821 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9t59s container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.644657 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9t59s" podUID="a617aa70-de94-4903-863e-0b10a2c9253d" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.646246 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d7a822d-04df-4ce0-bc18-15bc2195f18e-utilities\") pod \"redhat-operators-dn2dm\" (UID: \"7d7a822d-04df-4ce0-bc18-15bc2195f18e\") " pod="openshift-marketplace/redhat-operators-dn2dm"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.646482 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d7a822d-04df-4ce0-bc18-15bc2195f18e-catalog-content\") pod \"redhat-operators-dn2dm\" (UID: \"7d7a822d-04df-4ce0-bc18-15bc2195f18e\") " pod="openshift-marketplace/redhat-operators-dn2dm"
Sep 30 17:05:56 crc kubenswrapper[4821]: E0930 17:05:56.647231 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:57.147216806 +0000 UTC m=+153.052262750 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.660171 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-jsll6"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.660221 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-jsll6"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.699376 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n7pt\" (UniqueName: \"kubernetes.io/projected/7d7a822d-04df-4ce0-bc18-15bc2195f18e-kube-api-access-6n7pt\") pod \"redhat-operators-dn2dm\" (UID: \"7d7a822d-04df-4ce0-bc18-15bc2195f18e\") " pod="openshift-marketplace/redhat-operators-dn2dm"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.701428 4821 patch_prober.go:28] interesting pod/apiserver-76f77b778f-jsll6 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Sep 30 17:05:56 crc kubenswrapper[4821]: [+]log ok
Sep 30 17:05:56 crc kubenswrapper[4821]: [+]etcd ok
Sep 30 17:05:56 crc kubenswrapper[4821]: [+]poststarthook/start-apiserver-admission-initializer ok
Sep 30 17:05:56 crc kubenswrapper[4821]: [+]poststarthook/generic-apiserver-start-informers ok
Sep 30 17:05:56 crc kubenswrapper[4821]: [+]poststarthook/max-in-flight-filter ok
Sep 30 17:05:56 crc kubenswrapper[4821]: [+]poststarthook/storage-object-count-tracker-hook ok
Sep 30 17:05:56 crc kubenswrapper[4821]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Sep 30 17:05:56 crc kubenswrapper[4821]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Sep 30 17:05:56 crc kubenswrapper[4821]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Sep 30 17:05:56 crc kubenswrapper[4821]: [+]poststarthook/project.openshift.io-projectcache ok
Sep 30 17:05:56 crc kubenswrapper[4821]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Sep 30 17:05:56 crc kubenswrapper[4821]: [-]poststarthook/openshift.io-startinformers failed: reason withheld
Sep 30 17:05:56 crc kubenswrapper[4821]: [+]poststarthook/openshift.io-restmapperupdater ok
Sep 30 17:05:56 crc kubenswrapper[4821]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Sep 30 17:05:56 crc kubenswrapper[4821]: livez check failed
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.701541 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-jsll6" podUID="699c6b1a-b6dc-4ba8-8aba-9757f0c35c9a" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.744704 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.745193 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a04dbc2-4ac0-4926-96a5-aa5d11cf3380-utilities\") pod \"redhat-operators-7dvln\" (UID: \"4a04dbc2-4ac0-4926-96a5-aa5d11cf3380\") " pod="openshift-marketplace/redhat-operators-7dvln"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.745301 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a04dbc2-4ac0-4926-96a5-aa5d11cf3380-catalog-content\") pod \"redhat-operators-7dvln\" (UID: \"4a04dbc2-4ac0-4926-96a5-aa5d11cf3380\") " pod="openshift-marketplace/redhat-operators-7dvln"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.745440 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9lb4\" (UniqueName: \"kubernetes.io/projected/4a04dbc2-4ac0-4926-96a5-aa5d11cf3380-kube-api-access-r9lb4\") pod \"redhat-operators-7dvln\" (UID: \"4a04dbc2-4ac0-4926-96a5-aa5d11cf3380\") " pod="openshift-marketplace/redhat-operators-7dvln"
Sep 30 17:05:56 crc kubenswrapper[4821]: E0930 17:05:56.745804 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:05:57.245789751 +0000 UTC m=+153.150835695 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.746223 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a04dbc2-4ac0-4926-96a5-aa5d11cf3380-utilities\") pod \"redhat-operators-7dvln\" (UID: \"4a04dbc2-4ac0-4926-96a5-aa5d11cf3380\") " pod="openshift-marketplace/redhat-operators-7dvln"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.746518 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a04dbc2-4ac0-4926-96a5-aa5d11cf3380-catalog-content\") pod \"redhat-operators-7dvln\" (UID: \"4a04dbc2-4ac0-4926-96a5-aa5d11cf3380\") " pod="openshift-marketplace/redhat-operators-7dvln"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.821782 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9lb4\" (UniqueName: \"kubernetes.io/projected/4a04dbc2-4ac0-4926-96a5-aa5d11cf3380-kube-api-access-r9lb4\") pod \"redhat-operators-7dvln\" (UID: \"4a04dbc2-4ac0-4926-96a5-aa5d11cf3380\") " pod="openshift-marketplace/redhat-operators-7dvln"
Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.851706 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm"
Sep 30 17:05:56 crc kubenswrapper[4821]: E0930 17:05:56.852653 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:05:57.352641851 +0000 UTC m=+153.257687795 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6l6qm" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.859496 4821 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-09-30T17:05:56.027739693Z","Handler":null,"Name":""} Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.889059 4821 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.889952 4821 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.905356 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7dvln" Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.969773 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:05:56 crc kubenswrapper[4821]: I0930 17:05:56.997008 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dn2dm" Sep 30 17:05:57 crc kubenswrapper[4821]: I0930 17:05:57.091484 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-854tr" Sep 30 17:05:57 crc kubenswrapper[4821]: I0930 17:05:57.103893 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-qgksb" Sep 30 17:05:57 crc kubenswrapper[4821]: I0930 17:05:57.119989 4821 patch_prober.go:28] interesting pod/router-default-5444994796-qgksb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 17:05:57 crc kubenswrapper[4821]: [-]has-synced failed: reason withheld Sep 30 17:05:57 crc kubenswrapper[4821]: [+]process-running ok Sep 30 17:05:57 crc kubenswrapper[4821]: healthz check failed Sep 30 17:05:57 crc kubenswrapper[4821]: I0930 17:05:57.120041 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qgksb" podUID="140e919b-2356-4ff3-a604-76b6320ee714" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 17:05:57 crc kubenswrapper[4821]: I0930 17:05:57.124532 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rszth" Sep 30 17:05:57 crc kubenswrapper[4821]: I0930 17:05:57.141160 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Sep 30 17:05:57 crc kubenswrapper[4821]: I0930 17:05:57.181382 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:57 crc kubenswrapper[4821]: I0930 17:05:57.222017 4821 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Sep 30 17:05:57 crc kubenswrapper[4821]: I0930 17:05:57.222072 4821 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:57 crc kubenswrapper[4821]: I0930 17:05:57.318568 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fvjkc"] Sep 30 17:05:57 crc kubenswrapper[4821]: I0930 17:05:57.330278 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6rst9"] Sep 30 17:05:57 crc kubenswrapper[4821]: I0930 17:05:57.452376 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5ab0936e-6bd0-4587-b7e1-76575578eec9","Type":"ContainerStarted","Data":"bf656b64ed6766a40c212feed89bdd40055fb18b430574cd2de5ba1cf7a097bd"} Sep 30 17:05:57 crc kubenswrapper[4821]: I0930 17:05:57.452739 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5ab0936e-6bd0-4587-b7e1-76575578eec9","Type":"ContainerStarted","Data":"76d95926858a87981a246ed84fec1de08e6f2e8195dee8a7c221706f0b2ce886"} Sep 30 17:05:57 crc kubenswrapper[4821]: I0930 17:05:57.468424 4821 generic.go:334] "Generic (PLEG): container finished" podID="63e05203-88cc-4a5f-8b18-eb990b5a6ca0" containerID="93e5b9a2bf59d5fae2b209f03d2c6d3b4e25edbf8207265bcdae68b1a74a87fe" exitCode=0 Sep 30 17:05:57 crc kubenswrapper[4821]: I0930 17:05:57.468500 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpfbb" event={"ID":"63e05203-88cc-4a5f-8b18-eb990b5a6ca0","Type":"ContainerDied","Data":"93e5b9a2bf59d5fae2b209f03d2c6d3b4e25edbf8207265bcdae68b1a74a87fe"} Sep 30 17:05:57 crc kubenswrapper[4821]: I0930 17:05:57.476020 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rst9" event={"ID":"d457d791-8e73-44b2-9292-33188e42571f","Type":"ContainerStarted","Data":"8770e9e9cf8a7713d879e45b973ddff0fe4ec13b86adffd31aecfe3ea8c5c00d"} Sep 30 17:05:57 crc kubenswrapper[4821]: I0930 17:05:57.488320 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvjkc" event={"ID":"c07e1e4d-c8fa-48d6-a138-3c42ccf2e368","Type":"ContainerStarted","Data":"7d7c4e58c949ec2c221c39406ff01e2bcffbf5412bc867e0b87f4b2592596a03"} Sep 30 17:05:57 crc kubenswrapper[4821]: I0930 17:05:57.494206 4821 generic.go:334] "Generic (PLEG): container finished" podID="874864e1-57fd-4885-9714-cf9e3365b5c0" containerID="c98baffea75e9318c46095441b72f6888301c77c15ba279c130329293b814ce1" exitCode=0 Sep 30 17:05:57 crc kubenswrapper[4821]: I0930 17:05:57.495549 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ld6kq" event={"ID":"874864e1-57fd-4885-9714-cf9e3365b5c0","Type":"ContainerDied","Data":"c98baffea75e9318c46095441b72f6888301c77c15ba279c130329293b814ce1"} Sep 30 17:05:57 crc kubenswrapper[4821]: I0930 17:05:57.517198 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.517178357 podStartE2EDuration="2.517178357s" podCreationTimestamp="2025-09-30 17:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:57.473102239 +0000 UTC m=+153.378148183" watchObservedRunningTime="2025-09-30 17:05:57.517178357 +0000 UTC m=+153.422224301" Sep 30 17:05:57 crc kubenswrapper[4821]: I0930 17:05:57.521571 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lcmls" Sep 30 17:05:57 crc kubenswrapper[4821]: I0930 17:05:57.597370 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6l6qm\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:57 crc kubenswrapper[4821]: I0930 17:05:57.680995 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Sep 30 17:05:57 crc kubenswrapper[4821]: I0930 17:05:57.681721 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Sep 30 17:05:57 crc kubenswrapper[4821]: I0930 17:05:57.687196 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Sep 30 17:05:57 crc kubenswrapper[4821]: I0930 17:05:57.697647 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35656fe4-aab7-4d1e-b515-b33c8eb7ae8b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"35656fe4-aab7-4d1e-b515-b33c8eb7ae8b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Sep 30 17:05:57 crc kubenswrapper[4821]: I0930 17:05:57.697917 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35656fe4-aab7-4d1e-b515-b33c8eb7ae8b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"35656fe4-aab7-4d1e-b515-b33c8eb7ae8b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Sep 30 17:05:57 crc kubenswrapper[4821]: I0930 17:05:57.734578 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Sep 30 17:05:57 crc kubenswrapper[4821]: I0930 17:05:57.786364 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Sep 30 17:05:57 crc kubenswrapper[4821]: I0930 17:05:57.791982 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:57 crc kubenswrapper[4821]: I0930 17:05:57.799108 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35656fe4-aab7-4d1e-b515-b33c8eb7ae8b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"35656fe4-aab7-4d1e-b515-b33c8eb7ae8b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Sep 30 17:05:57 crc kubenswrapper[4821]: I0930 17:05:57.799441 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35656fe4-aab7-4d1e-b515-b33c8eb7ae8b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"35656fe4-aab7-4d1e-b515-b33c8eb7ae8b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Sep 30 17:05:57 crc kubenswrapper[4821]: I0930 17:05:57.800130 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35656fe4-aab7-4d1e-b515-b33c8eb7ae8b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"35656fe4-aab7-4d1e-b515-b33c8eb7ae8b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Sep 30 17:05:57 crc kubenswrapper[4821]: I0930 17:05:57.842923 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35656fe4-aab7-4d1e-b515-b33c8eb7ae8b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"35656fe4-aab7-4d1e-b515-b33c8eb7ae8b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Sep 30 17:05:58 crc kubenswrapper[4821]: I0930 17:05:58.068964 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dn2dm"] Sep 30 17:05:58 crc kubenswrapper[4821]: I0930 17:05:58.080926 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Sep 30 17:05:58 crc kubenswrapper[4821]: I0930 17:05:58.106556 4821 patch_prober.go:28] interesting pod/router-default-5444994796-qgksb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 17:05:58 crc kubenswrapper[4821]: [-]has-synced failed: reason withheld Sep 30 17:05:58 crc kubenswrapper[4821]: [+]process-running ok Sep 30 17:05:58 crc kubenswrapper[4821]: healthz check failed Sep 30 17:05:58 crc kubenswrapper[4821]: I0930 17:05:58.106818 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qgksb" podUID="140e919b-2356-4ff3-a604-76b6320ee714" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 17:05:58 crc kubenswrapper[4821]: W0930 17:05:58.143198 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d7a822d_04df_4ce0_bc18_15bc2195f18e.slice/crio-f5dc18a04bc4db03eb2b88cb9bff2cc81aeb453318bfeaca62bfde1269aecbc4 WatchSource:0}: Error finding container f5dc18a04bc4db03eb2b88cb9bff2cc81aeb453318bfeaca62bfde1269aecbc4: Status 404 returned error can't find the container with id f5dc18a04bc4db03eb2b88cb9bff2cc81aeb453318bfeaca62bfde1269aecbc4 Sep 30 17:05:58 crc kubenswrapper[4821]: I0930 17:05:58.427715 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7dvln"] Sep 30 17:05:58 crc kubenswrapper[4821]: W0930 17:05:58.463887 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a04dbc2_4ac0_4926_96a5_aa5d11cf3380.slice/crio-f56dc158676248c646e2507420976631cb75ad6ed8bd8018698cfb073229da8d WatchSource:0}: Error finding container f56dc158676248c646e2507420976631cb75ad6ed8bd8018698cfb073229da8d: Status 404 returned error can't find the container with id f56dc158676248c646e2507420976631cb75ad6ed8bd8018698cfb073229da8d Sep 30 17:05:58 crc kubenswrapper[4821]: I0930 17:05:58.516125 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dn2dm" event={"ID":"7d7a822d-04df-4ce0-bc18-15bc2195f18e","Type":"ContainerStarted","Data":"f5dc18a04bc4db03eb2b88cb9bff2cc81aeb453318bfeaca62bfde1269aecbc4"} Sep 30 17:05:58 crc kubenswrapper[4821]: I0930 17:05:58.518192 4821 generic.go:334] "Generic (PLEG): container finished" podID="d457d791-8e73-44b2-9292-33188e42571f" containerID="cd0322eee7c43676804cd5889bbcac7266bd6bc06c03e68626946952b32cae83" exitCode=0 Sep 30 17:05:58 crc kubenswrapper[4821]: I0930 17:05:58.518234 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rst9" event={"ID":"d457d791-8e73-44b2-9292-33188e42571f","Type":"ContainerDied","Data":"cd0322eee7c43676804cd5889bbcac7266bd6bc06c03e68626946952b32cae83"} Sep 30 17:05:58 crc kubenswrapper[4821]: I0930 17:05:58.560351 4821 generic.go:334] "Generic (PLEG): container finished" podID="c07e1e4d-c8fa-48d6-a138-3c42ccf2e368" containerID="10551ec087af527365481ca7f746523b78dbfd7a94b0f25a61ed7a5b9964aad6" exitCode=0 Sep 30 17:05:58 crc kubenswrapper[4821]: I0930 17:05:58.560436 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvjkc" 
event={"ID":"c07e1e4d-c8fa-48d6-a138-3c42ccf2e368","Type":"ContainerDied","Data":"10551ec087af527365481ca7f746523b78dbfd7a94b0f25a61ed7a5b9964aad6"} Sep 30 17:05:58 crc kubenswrapper[4821]: I0930 17:05:58.569761 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dvln" event={"ID":"4a04dbc2-4ac0-4926-96a5-aa5d11cf3380","Type":"ContainerStarted","Data":"f56dc158676248c646e2507420976631cb75ad6ed8bd8018698cfb073229da8d"} Sep 30 17:05:58 crc kubenswrapper[4821]: I0930 17:05:58.628491 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6l6qm"] Sep 30 17:05:58 crc kubenswrapper[4821]: I0930 17:05:58.642925 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9t59s" Sep 30 17:05:58 crc kubenswrapper[4821]: I0930 17:05:58.714508 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Sep 30 17:05:58 crc kubenswrapper[4821]: I0930 17:05:58.791927 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Sep 30 17:05:59 crc kubenswrapper[4821]: I0930 17:05:59.106573 4821 patch_prober.go:28] interesting pod/router-default-5444994796-qgksb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 17:05:59 crc kubenswrapper[4821]: [-]has-synced failed: reason withheld Sep 30 17:05:59 crc kubenswrapper[4821]: [+]process-running ok Sep 30 17:05:59 crc kubenswrapper[4821]: healthz check failed Sep 30 17:05:59 crc kubenswrapper[4821]: I0930 17:05:59.106848 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qgksb" podUID="140e919b-2356-4ff3-a604-76b6320ee714" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 17:05:59 crc kubenswrapper[4821]: I0930 17:05:59.577620 4821 generic.go:334] "Generic (PLEG): container finished" podID="56fb6d19-7b78-4122-9989-0676a86c33dd" containerID="c90ccf86686cd276de0d6201e7afea24a7f6056f7c35f9fab5e19810fa5e7ff4" exitCode=0 Sep 30 17:05:59 crc kubenswrapper[4821]: I0930 17:05:59.577696 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320860-mxq6r" event={"ID":"56fb6d19-7b78-4122-9989-0676a86c33dd","Type":"ContainerDied","Data":"c90ccf86686cd276de0d6201e7afea24a7f6056f7c35f9fab5e19810fa5e7ff4"} Sep 30 17:05:59 crc kubenswrapper[4821]: I0930 17:05:59.590012 4821 generic.go:334] "Generic (PLEG): container finished" podID="4a04dbc2-4ac0-4926-96a5-aa5d11cf3380" containerID="a13b7287177a109fd86c2794ddad087f4d2b2bbfad59fe93f826aa9412fd3d93" exitCode=0 Sep 30 17:05:59 crc kubenswrapper[4821]: I0930 17:05:59.590689 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dvln" event={"ID":"4a04dbc2-4ac0-4926-96a5-aa5d11cf3380","Type":"ContainerDied","Data":"a13b7287177a109fd86c2794ddad087f4d2b2bbfad59fe93f826aa9412fd3d93"} Sep 30 17:05:59 crc kubenswrapper[4821]: I0930 17:05:59.598140 4821 generic.go:334] "Generic (PLEG): container finished" podID="5ab0936e-6bd0-4587-b7e1-76575578eec9" containerID="bf656b64ed6766a40c212feed89bdd40055fb18b430574cd2de5ba1cf7a097bd" exitCode=0 
Sep 30 17:05:59 crc kubenswrapper[4821]: I0930 17:05:59.598214 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5ab0936e-6bd0-4587-b7e1-76575578eec9","Type":"ContainerDied","Data":"bf656b64ed6766a40c212feed89bdd40055fb18b430574cd2de5ba1cf7a097bd"} Sep 30 17:05:59 crc kubenswrapper[4821]: I0930 17:05:59.604996 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" event={"ID":"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d","Type":"ContainerStarted","Data":"00f1a3f61c21b7ff066447f858f9978a0204e41dd62d9ea4aa2a94deae03512b"} Sep 30 17:05:59 crc kubenswrapper[4821]: I0930 17:05:59.605028 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" event={"ID":"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d","Type":"ContainerStarted","Data":"8e50225478ecaf6dd9e7e938eabd178f08691129c002b54616e05b10e7243072"} Sep 30 17:05:59 crc kubenswrapper[4821]: I0930 17:05:59.605040 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:05:59 crc kubenswrapper[4821]: I0930 17:05:59.607941 4821 generic.go:334] "Generic (PLEG): container finished" podID="7d7a822d-04df-4ce0-bc18-15bc2195f18e" containerID="a660b5d6cbd17f57e0c532301d1036f4b72f7ea42842614d275913383d130284" exitCode=0 Sep 30 17:05:59 crc kubenswrapper[4821]: I0930 17:05:59.607982 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dn2dm" event={"ID":"7d7a822d-04df-4ce0-bc18-15bc2195f18e","Type":"ContainerDied","Data":"a660b5d6cbd17f57e0c532301d1036f4b72f7ea42842614d275913383d130284"} Sep 30 17:05:59 crc kubenswrapper[4821]: I0930 17:05:59.620673 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"35656fe4-aab7-4d1e-b515-b33c8eb7ae8b","Type":"ContainerStarted","Data":"bcdf018ac280e166f67bf47208ccf66c6cccf66f95137373b02a0b7a499c205f"} Sep 30 17:05:59 crc kubenswrapper[4821]: I0930 17:05:59.620711 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"35656fe4-aab7-4d1e-b515-b33c8eb7ae8b","Type":"ContainerStarted","Data":"c27e33385411476ad9950077f213ebfcc855c777baa6c318dd09c3cd5cbee5c9"} Sep 30 17:05:59 crc kubenswrapper[4821]: I0930 17:05:59.690550 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" podStartSLOduration=133.690529179 podStartE2EDuration="2m13.690529179s" podCreationTimestamp="2025-09-30 17:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:59.684730775 +0000 UTC m=+155.589776729" watchObservedRunningTime="2025-09-30 17:05:59.690529179 +0000 UTC m=+155.595575123" Sep 30 17:05:59 crc kubenswrapper[4821]: I0930 17:05:59.722880 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.722858624 podStartE2EDuration="2.722858624s" podCreationTimestamp="2025-09-30 17:05:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:05:59.717754976 +0000 UTC m=+155.622800910" watchObservedRunningTime="2025-09-30 
17:05:59.722858624 +0000 UTC m=+155.627904568" Sep 30 17:06:00 crc kubenswrapper[4821]: I0930 17:06:00.104969 4821 patch_prober.go:28] interesting pod/router-default-5444994796-qgksb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 17:06:00 crc kubenswrapper[4821]: [-]has-synced failed: reason withheld Sep 30 17:06:00 crc kubenswrapper[4821]: [+]process-running ok Sep 30 17:06:00 crc kubenswrapper[4821]: healthz check failed Sep 30 17:06:00 crc kubenswrapper[4821]: I0930 17:06:00.105600 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qgksb" podUID="140e919b-2356-4ff3-a604-76b6320ee714" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 17:06:00 crc kubenswrapper[4821]: I0930 17:06:00.662889 4821 generic.go:334] "Generic (PLEG): container finished" podID="35656fe4-aab7-4d1e-b515-b33c8eb7ae8b" containerID="bcdf018ac280e166f67bf47208ccf66c6cccf66f95137373b02a0b7a499c205f" exitCode=0 Sep 30 17:06:00 crc kubenswrapper[4821]: I0930 17:06:00.663013 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"35656fe4-aab7-4d1e-b515-b33c8eb7ae8b","Type":"ContainerDied","Data":"bcdf018ac280e166f67bf47208ccf66c6cccf66f95137373b02a0b7a499c205f"} Sep 30 17:06:01 crc kubenswrapper[4821]: I0930 17:06:01.064179 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320860-mxq6r" Sep 30 17:06:01 crc kubenswrapper[4821]: I0930 17:06:01.104594 4821 patch_prober.go:28] interesting pod/router-default-5444994796-qgksb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 17:06:01 crc kubenswrapper[4821]: [-]has-synced failed: reason withheld Sep 30 17:06:01 crc kubenswrapper[4821]: [+]process-running ok Sep 30 17:06:01 crc kubenswrapper[4821]: healthz check failed Sep 30 17:06:01 crc kubenswrapper[4821]: I0930 17:06:01.104642 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qgksb" podUID="140e919b-2356-4ff3-a604-76b6320ee714" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 17:06:01 crc kubenswrapper[4821]: I0930 17:06:01.169159 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/56fb6d19-7b78-4122-9989-0676a86c33dd-config-volume\") pod \"56fb6d19-7b78-4122-9989-0676a86c33dd\" (UID: \"56fb6d19-7b78-4122-9989-0676a86c33dd\") " Sep 30 17:06:01 crc kubenswrapper[4821]: I0930 17:06:01.169272 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/56fb6d19-7b78-4122-9989-0676a86c33dd-secret-volume\") pod \"56fb6d19-7b78-4122-9989-0676a86c33dd\" (UID: \"56fb6d19-7b78-4122-9989-0676a86c33dd\") " Sep 30 17:06:01 crc kubenswrapper[4821]: I0930 17:06:01.169349 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xs7qb\" (UniqueName: \"kubernetes.io/projected/56fb6d19-7b78-4122-9989-0676a86c33dd-kube-api-access-xs7qb\") pod \"56fb6d19-7b78-4122-9989-0676a86c33dd\" (UID: 
\"56fb6d19-7b78-4122-9989-0676a86c33dd\") " Sep 30 17:06:01 crc kubenswrapper[4821]: I0930 17:06:01.169802 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56fb6d19-7b78-4122-9989-0676a86c33dd-config-volume" (OuterVolumeSpecName: "config-volume") pod "56fb6d19-7b78-4122-9989-0676a86c33dd" (UID: "56fb6d19-7b78-4122-9989-0676a86c33dd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:06:01 crc kubenswrapper[4821]: I0930 17:06:01.183045 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56fb6d19-7b78-4122-9989-0676a86c33dd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "56fb6d19-7b78-4122-9989-0676a86c33dd" (UID: "56fb6d19-7b78-4122-9989-0676a86c33dd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:06:01 crc kubenswrapper[4821]: I0930 17:06:01.190517 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56fb6d19-7b78-4122-9989-0676a86c33dd-kube-api-access-xs7qb" (OuterVolumeSpecName: "kube-api-access-xs7qb") pod "56fb6d19-7b78-4122-9989-0676a86c33dd" (UID: "56fb6d19-7b78-4122-9989-0676a86c33dd"). InnerVolumeSpecName "kube-api-access-xs7qb". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:06:01 crc kubenswrapper[4821]: I0930 17:06:01.230825 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Sep 30 17:06:01 crc kubenswrapper[4821]: I0930 17:06:01.270261 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xs7qb\" (UniqueName: \"kubernetes.io/projected/56fb6d19-7b78-4122-9989-0676a86c33dd-kube-api-access-xs7qb\") on node \"crc\" DevicePath \"\"" Sep 30 17:06:01 crc kubenswrapper[4821]: I0930 17:06:01.270289 4821 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/56fb6d19-7b78-4122-9989-0676a86c33dd-config-volume\") on node \"crc\" DevicePath \"\"" Sep 30 17:06:01 crc kubenswrapper[4821]: I0930 17:06:01.270298 4821 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/56fb6d19-7b78-4122-9989-0676a86c33dd-secret-volume\") on node \"crc\" DevicePath \"\"" Sep 30 17:06:01 crc kubenswrapper[4821]: I0930 17:06:01.371351 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ab0936e-6bd0-4587-b7e1-76575578eec9-kube-api-access\") pod \"5ab0936e-6bd0-4587-b7e1-76575578eec9\" (UID: \"5ab0936e-6bd0-4587-b7e1-76575578eec9\") " Sep 30 17:06:01 crc kubenswrapper[4821]: I0930 17:06:01.371406 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ab0936e-6bd0-4587-b7e1-76575578eec9-kubelet-dir\") pod \"5ab0936e-6bd0-4587-b7e1-76575578eec9\" (UID: \"5ab0936e-6bd0-4587-b7e1-76575578eec9\") " Sep 30 17:06:01 crc kubenswrapper[4821]: I0930 17:06:01.371803 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ab0936e-6bd0-4587-b7e1-76575578eec9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5ab0936e-6bd0-4587-b7e1-76575578eec9" (UID: "5ab0936e-6bd0-4587-b7e1-76575578eec9"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:06:01 crc kubenswrapper[4821]: I0930 17:06:01.375060 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ab0936e-6bd0-4587-b7e1-76575578eec9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5ab0936e-6bd0-4587-b7e1-76575578eec9" (UID: "5ab0936e-6bd0-4587-b7e1-76575578eec9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:06:01 crc kubenswrapper[4821]: I0930 17:06:01.473127 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ab0936e-6bd0-4587-b7e1-76575578eec9-kube-api-access\") on node \"crc\" DevicePath \"\"" Sep 30 17:06:01 crc kubenswrapper[4821]: I0930 17:06:01.473159 4821 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ab0936e-6bd0-4587-b7e1-76575578eec9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Sep 30 17:06:01 crc kubenswrapper[4821]: I0930 17:06:01.663198 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:06:01 crc kubenswrapper[4821]: I0930 17:06:01.669178 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-jsll6" Sep 30 17:06:01 crc kubenswrapper[4821]: I0930 17:06:01.763463 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320860-mxq6r" event={"ID":"56fb6d19-7b78-4122-9989-0676a86c33dd","Type":"ContainerDied","Data":"ace4499a96e50b77e821caea7d1d7f9e1ee56b0f0342bc76f55ae98eecaf7be9"} Sep 30 17:06:01 crc kubenswrapper[4821]: I0930 17:06:01.763503 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ace4499a96e50b77e821caea7d1d7f9e1ee56b0f0342bc76f55ae98eecaf7be9" Sep 30 17:06:01 crc kubenswrapper[4821]: I0930 17:06:01.763586 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320860-mxq6r" Sep 30 17:06:01 crc kubenswrapper[4821]: I0930 17:06:01.790828 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Sep 30 17:06:01 crc kubenswrapper[4821]: I0930 17:06:01.790880 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5ab0936e-6bd0-4587-b7e1-76575578eec9","Type":"ContainerDied","Data":"76d95926858a87981a246ed84fec1de08e6f2e8195dee8a7c221706f0b2ce886"} Sep 30 17:06:01 crc kubenswrapper[4821]: I0930 17:06:01.790916 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76d95926858a87981a246ed84fec1de08e6f2e8195dee8a7c221706f0b2ce886" Sep 30 17:06:02 crc kubenswrapper[4821]: I0930 17:06:02.106372 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Sep 30 17:06:02 crc kubenswrapper[4821]: I0930 17:06:02.110618 4821 patch_prober.go:28] interesting pod/router-default-5444994796-qgksb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 17:06:02 crc kubenswrapper[4821]: [-]has-synced failed: reason withheld Sep 30 17:06:02 crc kubenswrapper[4821]: [+]process-running ok Sep 30 17:06:02 crc kubenswrapper[4821]: healthz check failed Sep 30 17:06:02 crc kubenswrapper[4821]: I0930 17:06:02.110661 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qgksb" podUID="140e919b-2356-4ff3-a604-76b6320ee714" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 17:06:02 crc kubenswrapper[4821]: I0930 17:06:02.134956 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-4c49j" Sep 30 17:06:02 crc kubenswrapper[4821]: I0930 17:06:02.315696 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35656fe4-aab7-4d1e-b515-b33c8eb7ae8b-kubelet-dir\") pod \"35656fe4-aab7-4d1e-b515-b33c8eb7ae8b\" (UID: \"35656fe4-aab7-4d1e-b515-b33c8eb7ae8b\") " Sep 30 17:06:02 crc kubenswrapper[4821]: I0930 17:06:02.315853 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35656fe4-aab7-4d1e-b515-b33c8eb7ae8b-kube-api-access\") pod \"35656fe4-aab7-4d1e-b515-b33c8eb7ae8b\" (UID: \"35656fe4-aab7-4d1e-b515-b33c8eb7ae8b\") " Sep 30 17:06:02 crc kubenswrapper[4821]: I0930 17:06:02.315875 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35656fe4-aab7-4d1e-b515-b33c8eb7ae8b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "35656fe4-aab7-4d1e-b515-b33c8eb7ae8b" (UID: "35656fe4-aab7-4d1e-b515-b33c8eb7ae8b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:06:02 crc kubenswrapper[4821]: I0930 17:06:02.316198 4821 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35656fe4-aab7-4d1e-b515-b33c8eb7ae8b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Sep 30 17:06:02 crc kubenswrapper[4821]: I0930 17:06:02.338755 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35656fe4-aab7-4d1e-b515-b33c8eb7ae8b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "35656fe4-aab7-4d1e-b515-b33c8eb7ae8b" (UID: "35656fe4-aab7-4d1e-b515-b33c8eb7ae8b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:06:02 crc kubenswrapper[4821]: I0930 17:06:02.417712 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35656fe4-aab7-4d1e-b515-b33c8eb7ae8b-kube-api-access\") on node \"crc\" DevicePath \"\"" Sep 30 17:06:02 crc kubenswrapper[4821]: I0930 17:06:02.826881 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Sep 30 17:06:02 crc kubenswrapper[4821]: I0930 17:06:02.828324 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"35656fe4-aab7-4d1e-b515-b33c8eb7ae8b","Type":"ContainerDied","Data":"c27e33385411476ad9950077f213ebfcc855c777baa6c318dd09c3cd5cbee5c9"} Sep 30 17:06:02 crc kubenswrapper[4821]: I0930 17:06:02.828345 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c27e33385411476ad9950077f213ebfcc855c777baa6c318dd09c3cd5cbee5c9" Sep 30 17:06:03 crc kubenswrapper[4821]: I0930 17:06:03.106872 4821 patch_prober.go:28] interesting pod/router-default-5444994796-qgksb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 17:06:03 crc kubenswrapper[4821]: [-]has-synced failed: reason withheld Sep 30 17:06:03 crc kubenswrapper[4821]: [+]process-running ok Sep 30 17:06:03 crc kubenswrapper[4821]: healthz check failed Sep 30 17:06:03 crc kubenswrapper[4821]: I0930 17:06:03.106939 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qgksb" podUID="140e919b-2356-4ff3-a604-76b6320ee714" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 17:06:04 crc kubenswrapper[4821]: I0930 17:06:04.172058 4821 patch_prober.go:28] interesting pod/router-default-5444994796-qgksb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 17:06:04 crc kubenswrapper[4821]: [-]has-synced failed: reason withheld Sep 30 17:06:04 crc kubenswrapper[4821]: [+]process-running ok Sep 30 17:06:04 crc kubenswrapper[4821]: healthz check failed Sep 30 17:06:04 crc kubenswrapper[4821]: I0930 17:06:04.172161 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qgksb" podUID="140e919b-2356-4ff3-a604-76b6320ee714" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 17:06:05 crc kubenswrapper[4821]: I0930 17:06:05.104852 4821 patch_prober.go:28] interesting pod/router-default-5444994796-qgksb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 17:06:05 crc kubenswrapper[4821]: [-]has-synced failed: reason withheld Sep 30 17:06:05 crc kubenswrapper[4821]: [+]process-running ok Sep 30 17:06:05 crc kubenswrapper[4821]: healthz check failed Sep 30 17:06:05 crc kubenswrapper[4821]: I0930 17:06:05.105817 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qgksb" podUID="140e919b-2356-4ff3-a604-76b6320ee714" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 17:06:05 crc kubenswrapper[4821]: I0930 17:06:05.507275 4821 patch_prober.go:28] interesting pod/console-f9d7485db-lzvgr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Sep 30 17:06:05 crc kubenswrapper[4821]: I0930 17:06:05.507348 4821 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-console/console-f9d7485db-lzvgr" podUID="08d6cb47-472a-4bda-bfc0-738029e84e40" containerName="console" probeResult="failure" output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" Sep 30 17:06:05 crc kubenswrapper[4821]: I0930 17:06:05.930599 4821 patch_prober.go:28] interesting pod/downloads-7954f5f757-4fxjh container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Sep 30 17:06:05 crc kubenswrapper[4821]: I0930 17:06:05.930657 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-4fxjh" podUID="f0fb9646-336c-4014-92ca-bb5caa55dde5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Sep 30 17:06:05 crc kubenswrapper[4821]: I0930 17:06:05.931956 4821 patch_prober.go:28] interesting pod/downloads-7954f5f757-4fxjh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Sep 30 17:06:05 crc kubenswrapper[4821]: I0930 17:06:05.931980 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4fxjh" podUID="f0fb9646-336c-4014-92ca-bb5caa55dde5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Sep 30 17:06:06 crc kubenswrapper[4821]: I0930 17:06:06.107563 4821 patch_prober.go:28] interesting pod/router-default-5444994796-qgksb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 17:06:06 crc kubenswrapper[4821]: [-]has-synced failed: reason withheld Sep 30 17:06:06 crc kubenswrapper[4821]: [+]process-running ok Sep 30 17:06:06 crc kubenswrapper[4821]: healthz check failed Sep 30 17:06:06 crc kubenswrapper[4821]: I0930 17:06:06.107628 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qgksb" podUID="140e919b-2356-4ff3-a604-76b6320ee714" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 17:06:07 crc kubenswrapper[4821]: I0930 17:06:07.105938 4821 patch_prober.go:28] interesting pod/router-default-5444994796-qgksb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 17:06:07 crc kubenswrapper[4821]: [-]has-synced failed: reason withheld Sep 30 17:06:07 crc kubenswrapper[4821]: [+]process-running ok Sep 30 17:06:07 crc kubenswrapper[4821]: healthz check failed Sep 30 17:06:07 crc kubenswrapper[4821]: I0930 17:06:07.106005 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qgksb" podUID="140e919b-2356-4ff3-a604-76b6320ee714" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 17:06:08 crc kubenswrapper[4821]: I0930 17:06:08.105763 4821 patch_prober.go:28] interesting pod/router-default-5444994796-qgksb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 17:06:08 crc kubenswrapper[4821]: [-]has-synced failed: reason withheld Sep 30 17:06:08 crc kubenswrapper[4821]: [+]process-running ok Sep 30 17:06:08 crc kubenswrapper[4821]: healthz check failed Sep 30 17:06:08 crc kubenswrapper[4821]: I0930 17:06:08.106112 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qgksb" podUID="140e919b-2356-4ff3-a604-76b6320ee714" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 17:06:08 crc kubenswrapper[4821]: I0930 17:06:08.757668 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc-metrics-certs\") pod \"network-metrics-daemon-zkvtw\" (UID: \"3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc\") " pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:06:08 crc kubenswrapper[4821]: I0930 17:06:08.776991 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc-metrics-certs\") pod \"network-metrics-daemon-zkvtw\" (UID: \"3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc\") " pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:06:08 crc kubenswrapper[4821]: I0930 17:06:08.931437 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkvtw" Sep 30 17:06:09 crc kubenswrapper[4821]: I0930 17:06:09.104466 4821 patch_prober.go:28] interesting pod/router-default-5444994796-qgksb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 17:06:09 crc kubenswrapper[4821]: [-]has-synced failed: reason withheld Sep 30 17:06:09 crc kubenswrapper[4821]: [+]process-running ok Sep 30 17:06:09 crc kubenswrapper[4821]: healthz check failed Sep 30 17:06:09 crc kubenswrapper[4821]: I0930 17:06:09.104527 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qgksb" podUID="140e919b-2356-4ff3-a604-76b6320ee714" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 17:06:10 crc kubenswrapper[4821]: I0930 17:06:10.105530 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-qgksb" Sep 30 17:06:10 crc kubenswrapper[4821]: I0930 17:06:10.108743 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-qgksb" Sep 30 17:06:15 crc kubenswrapper[4821]: I0930 17:06:15.517329 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-lzvgr" Sep 30 17:06:15 crc kubenswrapper[4821]: I0930 17:06:15.525630 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-lzvgr" Sep 30 17:06:15 crc kubenswrapper[4821]: I0930 17:06:15.913133 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-zkvtw"] Sep 30 17:06:15 crc kubenswrapper[4821]: I0930 17:06:15.935167 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-4fxjh" Sep 30 17:06:17 crc kubenswrapper[4821]: 
I0930 17:06:17.800814 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm"
Sep 30 17:06:19 crc kubenswrapper[4821]: I0930 17:06:19.350148 4821 patch_prober.go:28] interesting pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 30 17:06:19 crc kubenswrapper[4821]: I0930 17:06:19.350194 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 30 17:06:25 crc kubenswrapper[4821]: I0930 17:06:25.992285 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-zkvtw" event={"ID":"3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc","Type":"ContainerStarted","Data":"38651d3cccd3c80e938d2c5d3808d8853ccabf995c9ac7584aca283790efce2a"}
Sep 30 17:06:26 crc kubenswrapper[4821]: E0930 17:06:26.449158 4821 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Sep 30 17:06:26 crc kubenswrapper[4821]: E0930 17:06:26.449634 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6n7pt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dn2dm_openshift-marketplace(7d7a822d-04df-4ce0-bc18-15bc2195f18e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Sep 30 17:06:26 crc kubenswrapper[4821]: E0930 17:06:26.451376 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-dn2dm" podUID="7d7a822d-04df-4ce0-bc18-15bc2195f18e"
Sep 30 17:06:27 crc kubenswrapper[4821]: I0930 17:06:27.123109 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4qvdx"
Sep 30 17:06:29 crc kubenswrapper[4821]: E0930 17:06:29.051158 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-dn2dm" podUID="7d7a822d-04df-4ce0-bc18-15bc2195f18e"
Sep 30 17:06:29 crc kubenswrapper[4821]: E0930 17:06:29.139998 4821 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Sep 30 17:06:29 crc kubenswrapper[4821]: E0930 17:06:29.140529 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9fnsv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-dpfbb_openshift-marketplace(63e05203-88cc-4a5f-8b18-eb990b5a6ca0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Sep 30 17:06:29 crc kubenswrapper[4821]: E0930 17:06:29.142040 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-dpfbb" podUID="63e05203-88cc-4a5f-8b18-eb990b5a6ca0"
Sep 30 17:06:30 crc kubenswrapper[4821]: E0930 17:06:30.471938 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-dpfbb" podUID="63e05203-88cc-4a5f-8b18-eb990b5a6ca0"
Sep 30 17:06:30 crc kubenswrapper[4821]: E0930 17:06:30.554197 4821 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Sep 30 17:06:30 crc kubenswrapper[4821]: E0930 17:06:30.554617 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9dzms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-ld6kq_openshift-marketplace(874864e1-57fd-4885-9714-cf9e3365b5c0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Sep 30 17:06:30 crc kubenswrapper[4821]: E0930 17:06:30.555749 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-ld6kq" podUID="874864e1-57fd-4885-9714-cf9e3365b5c0"
Sep 30 17:06:30 crc kubenswrapper[4821]: E0930 17:06:30.572893 4821 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Sep 30 17:06:30 crc kubenswrapper[4821]: E0930 17:06:30.573306 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7tf5r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-4bqx8_openshift-marketplace(5aa5939d-5ace-49e7-a2ba-b028cf241b02): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Sep 30 17:06:30 crc kubenswrapper[4821]: E0930 17:06:30.574502 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-4bqx8" podUID="5aa5939d-5ace-49e7-a2ba-b028cf241b02"
Sep 30 17:06:31 crc kubenswrapper[4821]: E0930 17:06:31.349364 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-4bqx8" podUID="5aa5939d-5ace-49e7-a2ba-b028cf241b02"
Sep 30 17:06:31 crc kubenswrapper[4821]: E0930 17:06:31.350253 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ld6kq" podUID="874864e1-57fd-4885-9714-cf9e3365b5c0"
Sep 30 17:06:31 crc kubenswrapper[4821]: E0930 17:06:31.434517 4821 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Sep 30 17:06:31 crc kubenswrapper[4821]: E0930 17:06:31.434708 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7zsmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fvjkc_openshift-marketplace(c07e1e4d-c8fa-48d6-a138-3c42ccf2e368): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Sep 30 17:06:31 crc kubenswrapper[4821]: E0930 17:06:31.436725 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-fvjkc" podUID="c07e1e4d-c8fa-48d6-a138-3c42ccf2e368"
Sep 30 17:06:31 crc kubenswrapper[4821]: E0930 17:06:31.457259 4821 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Sep 30 17:06:31 crc kubenswrapper[4821]: E0930 17:06:31.457412 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v5sbn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-jn2vp_openshift-marketplace(7700fbde-8552-4aa1-b6e9-910bf3a45207): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Sep 30 17:06:31 crc kubenswrapper[4821]: E0930 17:06:31.458637 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-jn2vp" podUID="7700fbde-8552-4aa1-b6e9-910bf3a45207"
Sep 30 17:06:31 crc kubenswrapper[4821]: E0930 17:06:31.493876 4821 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Sep 30 17:06:31 crc kubenswrapper[4821]: E0930 17:06:31.494056 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-skshp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-6rst9_openshift-marketplace(d457d791-8e73-44b2-9292-33188e42571f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Sep 30 17:06:31 crc kubenswrapper[4821]: E0930 17:06:31.494860 4821 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Sep 30 17:06:31 crc kubenswrapper[4821]: E0930 17:06:31.494961 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r9lb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-7dvln_openshift-marketplace(4a04dbc2-4ac0-4926-96a5-aa5d11cf3380): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Sep 30 17:06:31 crc kubenswrapper[4821]: E0930 17:06:31.495970 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-6rst9" podUID="d457d791-8e73-44b2-9292-33188e42571f"
Sep 30 17:06:31 crc kubenswrapper[4821]: E0930 17:06:31.496028 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-7dvln" podUID="4a04dbc2-4ac0-4926-96a5-aa5d11cf3380"
Sep 30 17:06:32 crc kubenswrapper[4821]: I0930 17:06:32.024190 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-zkvtw" event={"ID":"3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc","Type":"ContainerStarted","Data":"6ba5edc71b52e61485057792267685a199d70a0c05b50ae15236f1d91b0bc5db"}
Sep 30 17:06:32 crc kubenswrapper[4821]: I0930 17:06:32.024485 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-zkvtw" event={"ID":"3a9a8df1-3e88-4b01-9993-d1bbd4b6f4bc","Type":"ContainerStarted","Data":"33d870fc1159932bdcf7b38e6ef1ee44590f10ce716e71c02cd386c3fcf5cc0b"}
Sep 30 17:06:32 crc kubenswrapper[4821]: E0930 17:06:32.026022 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-6rst9" podUID="d457d791-8e73-44b2-9292-33188e42571f"
Sep 30 17:06:32 crc kubenswrapper[4821]: E0930 17:06:32.026203 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-jn2vp" podUID="7700fbde-8552-4aa1-b6e9-910bf3a45207"
Sep 30 17:06:32 crc kubenswrapper[4821]: E0930 17:06:32.026277 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fvjkc" podUID="c07e1e4d-c8fa-48d6-a138-3c42ccf2e368"
Sep 30 17:06:32 crc kubenswrapper[4821]: E0930 17:06:32.026306 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7dvln" podUID="4a04dbc2-4ac0-4926-96a5-aa5d11cf3380"
Sep 30 17:06:32 crc kubenswrapper[4821]: I0930 17:06:32.045440 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-zkvtw" podStartSLOduration=166.045420917 podStartE2EDuration="2m46.045420917s" podCreationTimestamp="2025-09-30 17:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:06:32.043827357 +0000 UTC m=+187.948873301" watchObservedRunningTime="2025-09-30 17:06:32.045420917 +0000 UTC m=+187.950466871"
Sep 30 17:06:32 crc kubenswrapper[4821]: I0930 17:06:32.836528 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 30 17:06:43 crc kubenswrapper[4821]: I0930 17:06:43.076740 4821 generic.go:334] "Generic (PLEG): container finished" podID="63e05203-88cc-4a5f-8b18-eb990b5a6ca0" containerID="6e604cc38e886f81c578f15fb5fe3a920538f2162ae12bd69e5667a2d5bd1163" exitCode=0
Sep 30 17:06:43 crc kubenswrapper[4821]: I0930 17:06:43.076803 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpfbb" event={"ID":"63e05203-88cc-4a5f-8b18-eb990b5a6ca0","Type":"ContainerDied","Data":"6e604cc38e886f81c578f15fb5fe3a920538f2162ae12bd69e5667a2d5bd1163"}
Sep 30 17:06:43 crc kubenswrapper[4821]: I0930 17:06:43.084972 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dn2dm" event={"ID":"7d7a822d-04df-4ce0-bc18-15bc2195f18e","Type":"ContainerStarted","Data":"fb09e9fbd945ca04460a41d2f4512d1296147479e062f8c8b0e99931c37f727d"}
Sep 30 17:06:44 crc kubenswrapper[4821]: I0930 17:06:44.091814 4821 generic.go:334] "Generic (PLEG): container finished" podID="5aa5939d-5ace-49e7-a2ba-b028cf241b02" containerID="5094e369a588736b2b78fd93dda9496c7e8ad65a5335aec91a92a918d9ddb136" exitCode=0
Sep 30 17:06:44 crc kubenswrapper[4821]: I0930 17:06:44.091882 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bqx8" event={"ID":"5aa5939d-5ace-49e7-a2ba-b028cf241b02","Type":"ContainerDied","Data":"5094e369a588736b2b78fd93dda9496c7e8ad65a5335aec91a92a918d9ddb136"}
Sep 30 17:06:44 crc kubenswrapper[4821]: I0930 17:06:44.094889 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpfbb" event={"ID":"63e05203-88cc-4a5f-8b18-eb990b5a6ca0","Type":"ContainerStarted","Data":"8a586e974a21af66cad4fa5d4b5c08c519c8449b8c97e65b481c18d6e8166e86"}
Sep 30 17:06:44 crc kubenswrapper[4821]: I0930 17:06:44.099915 4821 generic.go:334] "Generic (PLEG): container finished" podID="7d7a822d-04df-4ce0-bc18-15bc2195f18e" containerID="fb09e9fbd945ca04460a41d2f4512d1296147479e062f8c8b0e99931c37f727d" exitCode=0
Sep 30 17:06:44 crc kubenswrapper[4821]: I0930 17:06:44.100242 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dn2dm" event={"ID":"7d7a822d-04df-4ce0-bc18-15bc2195f18e","Type":"ContainerDied","Data":"fb09e9fbd945ca04460a41d2f4512d1296147479e062f8c8b0e99931c37f727d"}
Sep 30 17:06:44 crc kubenswrapper[4821]: I0930 17:06:44.104202 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dpfbb"
Sep 30 17:06:44 crc kubenswrapper[4821]: I0930 17:06:44.104276 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dpfbb"
Sep 30 17:06:44 crc kubenswrapper[4821]: I0930 17:06:44.150922 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dpfbb" podStartSLOduration=5.043224523 podStartE2EDuration="51.150898877s" podCreationTimestamp="2025-09-30 17:05:53 +0000 UTC" firstStartedPulling="2025-09-30 17:05:57.473396896 +0000 UTC m=+153.378442840" lastFinishedPulling="2025-09-30 17:06:43.58107125 +0000 UTC m=+199.486117194" observedRunningTime="2025-09-30 17:06:44.148739051 +0000 UTC m=+200.053784985" watchObservedRunningTime="2025-09-30 17:06:44.150898877 +0000 UTC m=+200.055944821"
Sep 30 17:06:45 crc kubenswrapper[4821]: I0930 17:06:45.107788 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dn2dm" event={"ID":"7d7a822d-04df-4ce0-bc18-15bc2195f18e","Type":"ContainerStarted","Data":"6c50c125cd07e97a671160725289be2a6cb600fe7940285779715c67acfd21e7"}
Sep 30 17:06:45 crc kubenswrapper[4821]: I0930 17:06:45.228780 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-dpfbb" podUID="63e05203-88cc-4a5f-8b18-eb990b5a6ca0" containerName="registry-server" probeResult="failure" output=<
Sep 30 17:06:45 crc kubenswrapper[4821]: timeout: failed to connect service ":50051" within 1s
Sep 30 17:06:45 crc kubenswrapper[4821]: >
Sep 30 17:06:45 crc kubenswrapper[4821]: I0930 17:06:45.734924 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dn2dm" podStartSLOduration=4.70842113 podStartE2EDuration="49.734907824s" podCreationTimestamp="2025-09-30 17:05:56 +0000 UTC" firstStartedPulling="2025-09-30 17:05:59.610153287 +0000 UTC m=+155.515199231" lastFinishedPulling="2025-09-30 17:06:44.636639981 +0000 UTC m=+200.541685925" observedRunningTime="2025-09-30 17:06:45.130721499 +0000 UTC m=+201.035767443" watchObservedRunningTime="2025-09-30 17:06:45.734907824 +0000 UTC m=+201.639953768"
Sep 30 17:06:46 crc kubenswrapper[4821]: I0930 17:06:46.113927 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bqx8" event={"ID":"5aa5939d-5ace-49e7-a2ba-b028cf241b02","Type":"ContainerStarted","Data":"16da39e29f9bcdc4a1e85309b9f2f83d4be7aaf7a1814e7ca41cb236ac346b6e"}
Sep 30 17:06:46 crc kubenswrapper[4821]: I0930 17:06:46.132533 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4bqx8" podStartSLOduration=3.949833351 podStartE2EDuration="53.132513373s" podCreationTimestamp="2025-09-30 17:05:53 +0000 UTC" firstStartedPulling="2025-09-30 17:05:56.421601519 +0000 UTC m=+152.326647453" lastFinishedPulling="2025-09-30 17:06:45.604281531 +0000 UTC m=+201.509327475" observedRunningTime="2025-09-30 17:06:46.130195984 +0000 UTC m=+202.035241938" watchObservedRunningTime="2025-09-30 17:06:46.132513373 +0000 UTC m=+202.037559317"
Sep 30 17:06:46 crc kubenswrapper[4821]: I0930 17:06:46.997973 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dn2dm"
Sep 30 17:06:46 crc kubenswrapper[4821]: I0930 17:06:46.998626 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dn2dm"
Sep 30 17:06:47 crc kubenswrapper[4821]: I0930 17:06:47.119845 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dvln" event={"ID":"4a04dbc2-4ac0-4926-96a5-aa5d11cf3380","Type":"ContainerStarted","Data":"40aa81e93da67049e604f2b335c5eddc84bfb6a2f9cb130e007b761efc6f53d9"}
Sep 30 17:06:47 crc kubenswrapper[4821]: I0930 17:06:47.122854 4821 generic.go:334] "Generic (PLEG): container finished" podID="7700fbde-8552-4aa1-b6e9-910bf3a45207" containerID="ee61d309a86d52d116a5fe3a41a93205a0a75aeb85ff311610cec02545c55f42" exitCode=0
Sep 30 17:06:47 crc kubenswrapper[4821]: I0930 17:06:47.122929 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jn2vp" event={"ID":"7700fbde-8552-4aa1-b6e9-910bf3a45207","Type":"ContainerDied","Data":"ee61d309a86d52d116a5fe3a41a93205a0a75aeb85ff311610cec02545c55f42"}
Sep 30 17:06:48 crc kubenswrapper[4821]: I0930 17:06:48.044825 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dn2dm" podUID="7d7a822d-04df-4ce0-bc18-15bc2195f18e" containerName="registry-server" probeResult="failure" output=<
Sep 30 17:06:48 crc kubenswrapper[4821]: timeout: failed to connect service ":50051" within 1s
Sep 30 17:06:48 crc kubenswrapper[4821]: >
Sep 30 17:06:48 crc kubenswrapper[4821]: I0930 17:06:48.128895 4821 generic.go:334] "Generic (PLEG): container finished" podID="d457d791-8e73-44b2-9292-33188e42571f" containerID="de2b5c845c4b725cc8b6fca9069d2d12e5c99780fde817a9fef21771750631bc" exitCode=0
Sep 30 17:06:48 crc kubenswrapper[4821]: I0930 17:06:48.128957 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rst9" event={"ID":"d457d791-8e73-44b2-9292-33188e42571f","Type":"ContainerDied","Data":"de2b5c845c4b725cc8b6fca9069d2d12e5c99780fde817a9fef21771750631bc"}
Sep 30 17:06:48 crc kubenswrapper[4821]: I0930 17:06:48.134813 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jn2vp" event={"ID":"7700fbde-8552-4aa1-b6e9-910bf3a45207","Type":"ContainerStarted","Data":"d85d2ec0cda6163d6b9fec85bf88a11951bea26ebdb415c165cfb3c0fe47ce6d"}
Sep 30 17:06:48 crc kubenswrapper[4821]: I0930 17:06:48.137973 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvjkc" event={"ID":"c07e1e4d-c8fa-48d6-a138-3c42ccf2e368","Type":"ContainerStarted","Data":"c7ab21000379865292258f3c9d64c10801d46f708e3b3848164e9d1f21823db4"}
Sep 30 17:06:48 crc kubenswrapper[4821]: I0930 17:06:48.139981 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ld6kq" event={"ID":"874864e1-57fd-4885-9714-cf9e3365b5c0","Type":"ContainerStarted","Data":"426593123a473bcd42a49e427e0860d80065b84f0e689c82ed05de8c23842442"}
Sep 30 17:06:48 crc kubenswrapper[4821]: I0930 17:06:48.142725 4821 generic.go:334] "Generic (PLEG): container finished" podID="4a04dbc2-4ac0-4926-96a5-aa5d11cf3380" containerID="40aa81e93da67049e604f2b335c5eddc84bfb6a2f9cb130e007b761efc6f53d9" exitCode=0
Sep 30 17:06:48 crc kubenswrapper[4821]: I0930 17:06:48.142752 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dvln" event={"ID":"4a04dbc2-4ac0-4926-96a5-aa5d11cf3380","Type":"ContainerDied","Data":"40aa81e93da67049e604f2b335c5eddc84bfb6a2f9cb130e007b761efc6f53d9"}
Sep 30 17:06:48 crc kubenswrapper[4821]: I0930 17:06:48.166792 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jn2vp" podStartSLOduration=4.019580779 podStartE2EDuration="55.166777036s" podCreationTimestamp="2025-09-30 17:05:53 +0000 UTC" firstStartedPulling="2025-09-30 17:05:56.40438171 +0000 UTC m=+152.309427654" lastFinishedPulling="2025-09-30 17:06:47.551577967 +0000 UTC m=+203.456623911" observedRunningTime="2025-09-30 17:06:48.163049361 +0000 UTC m=+204.068095305" watchObservedRunningTime="2025-09-30 17:06:48.166777036 +0000 UTC m=+204.071822980"
Sep 30 17:06:49 crc kubenswrapper[4821]: I0930 17:06:49.150018 4821 generic.go:334] "Generic (PLEG): container finished" podID="c07e1e4d-c8fa-48d6-a138-3c42ccf2e368" containerID="c7ab21000379865292258f3c9d64c10801d46f708e3b3848164e9d1f21823db4" exitCode=0
Sep 30 17:06:49 crc kubenswrapper[4821]: I0930 17:06:49.150109 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvjkc" event={"ID":"c07e1e4d-c8fa-48d6-a138-3c42ccf2e368","Type":"ContainerDied","Data":"c7ab21000379865292258f3c9d64c10801d46f708e3b3848164e9d1f21823db4"}
Sep 30 17:06:49 crc kubenswrapper[4821]: I0930 17:06:49.152576 4821 generic.go:334] "Generic (PLEG): container finished" podID="874864e1-57fd-4885-9714-cf9e3365b5c0" containerID="426593123a473bcd42a49e427e0860d80065b84f0e689c82ed05de8c23842442" exitCode=0
Sep 30 17:06:49 crc kubenswrapper[4821]: I0930 17:06:49.152613 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ld6kq" event={"ID":"874864e1-57fd-4885-9714-cf9e3365b5c0","Type":"ContainerDied","Data":"426593123a473bcd42a49e427e0860d80065b84f0e689c82ed05de8c23842442"}
Sep 30 17:06:49 crc kubenswrapper[4821]: I0930 17:06:49.349828 4821 patch_prober.go:28] interesting pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 30 17:06:49 crc kubenswrapper[4821]: I0930 17:06:49.349891 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 30 17:06:49 crc kubenswrapper[4821]: I0930 17:06:49.349933 4821 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd"
Sep 30 17:06:49 crc kubenswrapper[4821]: I0930 17:06:49.350474 4821 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096"} pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Sep 30 17:06:49 crc kubenswrapper[4821]: I0930 17:06:49.350573 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" containerID="cri-o://6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096" gracePeriod=600
Sep 30 17:06:50 crc kubenswrapper[4821]: I0930 17:06:50.159266 4821 generic.go:334] "Generic (PLEG): container finished" podID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerID="6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096" exitCode=0
Sep 30 17:06:50 crc kubenswrapper[4821]: I0930 17:06:50.159316 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" event={"ID":"1c2ce348-eadc-4629-a03f-fb8924b5b434","Type":"ContainerDied","Data":"6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096"}
Sep 30 17:06:51 crc kubenswrapper[4821]: I0930 17:06:51.165580 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rst9" event={"ID":"d457d791-8e73-44b2-9292-33188e42571f","Type":"ContainerStarted","Data":"1cf9ae1ba4be21cc3638c60fd90d9db3be1abb70761febcb000b51293c2e182b"} Sep 30 17:06:51 crc kubenswrapper[4821]: I0930 17:06:51.167520 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" event={"ID":"1c2ce348-eadc-4629-a03f-fb8924b5b434","Type":"ContainerStarted","Data":"694415fa80647cce635089cdbd596b460c91aca25e334ca866d1832662c4cfb8"} Sep 30 17:06:51 crc kubenswrapper[4821]: I0930 17:06:51.169460 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvjkc" event={"ID":"c07e1e4d-c8fa-48d6-a138-3c42ccf2e368","Type":"ContainerStarted","Data":"eec0b1746d738d7c3d6385ceff8769a119b4dfb0b9462e3682ca18a4b5db1705"} Sep 30 17:06:51 crc kubenswrapper[4821]: I0930 17:06:51.171402 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dvln" event={"ID":"4a04dbc2-4ac0-4926-96a5-aa5d11cf3380","Type":"ContainerStarted","Data":"8d6612110b925076e9364ae4606982c25211990b8103ddd6cf08fb23ebb9aeed"} Sep 30 17:06:51 crc kubenswrapper[4821]: I0930 17:06:51.172968 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ld6kq" event={"ID":"874864e1-57fd-4885-9714-cf9e3365b5c0","Type":"ContainerStarted","Data":"f07c2374abeaefe1af2bf5af5abd30f83032397348fd0d711cf3ce18bfb48bcf"} Sep 30 17:06:51 crc kubenswrapper[4821]: I0930 17:06:51.193427 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6rst9" podStartSLOduration=4.24576872 podStartE2EDuration="56.193407213s" podCreationTimestamp="2025-09-30 17:05:55 +0000 UTC" firstStartedPulling="2025-09-30 17:05:58.523273356 +0000 UTC m=+154.428319300" lastFinishedPulling="2025-09-30 17:06:50.470911859 +0000 UTC m=+206.375957793" observedRunningTime="2025-09-30 17:06:51.189871422 +0000 UTC m=+207.094917366" watchObservedRunningTime="2025-09-30 17:06:51.193407213 +0000 UTC m=+207.098453157" Sep 30 17:06:51 crc kubenswrapper[4821]: I0930 17:06:51.271585 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7dvln" podStartSLOduration=4.332892375 podStartE2EDuration="55.271567314s" podCreationTimestamp="2025-09-30 17:05:56 +0000 UTC" firstStartedPulling="2025-09-30 17:05:59.601575844 +0000 UTC m=+155.506621778" lastFinishedPulling="2025-09-30 17:06:50.540250773 +0000 UTC m=+206.445296717" observedRunningTime="2025-09-30 17:06:51.268800113 +0000 UTC m=+207.173846057" watchObservedRunningTime="2025-09-30 17:06:51.271567314 +0000 UTC m=+207.176613268" Sep 30 17:06:51 crc kubenswrapper[4821]: I0930 17:06:51.300096 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fvjkc" podStartSLOduration=4.35047158 podStartE2EDuration="56.300071383s" podCreationTimestamp="2025-09-30 17:05:55 +0000 UTC" firstStartedPulling="2025-09-30 17:05:58.568330808 +0000 UTC m=+154.473376752" lastFinishedPulling="2025-09-30 17:06:50.517930611 +0000 UTC m=+206.422976555" observedRunningTime="2025-09-30 17:06:51.296568684 +0000 UTC m=+207.201614628" watchObservedRunningTime="2025-09-30 17:06:51.300071383 +0000 UTC m=+207.205117327" Sep 30 17:06:51 crc 
kubenswrapper[4821]: I0930 17:06:51.318914 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ld6kq" podStartSLOduration=5.175069941 podStartE2EDuration="58.318896305s" podCreationTimestamp="2025-09-30 17:05:53 +0000 UTC" firstStartedPulling="2025-09-30 17:05:57.498187774 +0000 UTC m=+153.403233718" lastFinishedPulling="2025-09-30 17:06:50.642014148 +0000 UTC m=+206.547060082" observedRunningTime="2025-09-30 17:06:51.317759256 +0000 UTC m=+207.222805200" watchObservedRunningTime="2025-09-30 17:06:51.318896305 +0000 UTC m=+207.223942249" Sep 30 17:06:53 crc kubenswrapper[4821]: I0930 17:06:53.105630 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qrpnr"] Sep 30 17:06:53 crc kubenswrapper[4821]: I0930 17:06:53.934137 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4bqx8" Sep 30 17:06:53 crc kubenswrapper[4821]: I0930 17:06:53.936089 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4bqx8" Sep 30 17:06:53 crc kubenswrapper[4821]: I0930 17:06:53.979997 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jn2vp" Sep 30 17:06:53 crc kubenswrapper[4821]: I0930 17:06:53.980149 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4bqx8" Sep 30 17:06:53 crc kubenswrapper[4821]: I0930 17:06:53.980162 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jn2vp" Sep 30 17:06:54 crc kubenswrapper[4821]: I0930 17:06:54.027454 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jn2vp" Sep 30 17:06:54 crc kubenswrapper[4821]: I0930 17:06:54.146441 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dpfbb" Sep 30 17:06:54 crc kubenswrapper[4821]: I0930 17:06:54.186460 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dpfbb" Sep 30 17:06:54 crc kubenswrapper[4821]: I0930 17:06:54.226184 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4bqx8" Sep 30 17:06:54 crc kubenswrapper[4821]: I0930 17:06:54.235587 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jn2vp" Sep 30 17:06:54 crc kubenswrapper[4821]: I0930 17:06:54.329295 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ld6kq" Sep 30 17:06:54 crc kubenswrapper[4821]: I0930 17:06:54.329597 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ld6kq" Sep 30 17:06:54 crc kubenswrapper[4821]: I0930 17:06:54.366610 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ld6kq" Sep 30 17:06:55 crc kubenswrapper[4821]: I0930 17:06:55.241325 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ld6kq" Sep 30 17:06:55 crc kubenswrapper[4821]: I0930 17:06:55.898313 4821 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fvjkc" Sep 30 17:06:55 crc kubenswrapper[4821]: I0930 17:06:55.898944 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fvjkc" Sep 30 17:06:55 crc kubenswrapper[4821]: I0930 17:06:55.947487 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fvjkc" Sep 30 17:06:56 crc kubenswrapper[4821]: I0930 17:06:56.253593 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6rst9" Sep 30 17:06:56 crc kubenswrapper[4821]: I0930 17:06:56.254500 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6rst9" Sep 30 17:06:56 crc kubenswrapper[4821]: I0930 17:06:56.258619 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fvjkc" Sep 30 17:06:56 crc kubenswrapper[4821]: I0930 17:06:56.301931 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6rst9" Sep 30 17:06:56 crc kubenswrapper[4821]: I0930 17:06:56.905780 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7dvln" Sep 30 17:06:56 crc kubenswrapper[4821]: I0930 17:06:56.905818 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7dvln" Sep 30 17:06:56 crc kubenswrapper[4821]: I0930 17:06:56.944774 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7dvln" Sep 30 17:06:57 crc kubenswrapper[4821]: I0930 17:06:57.030629 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dn2dm" Sep 30 17:06:57 crc kubenswrapper[4821]: I0930 17:06:57.079777 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dn2dm" Sep 30 17:06:57 crc kubenswrapper[4821]: I0930 17:06:57.241326 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6rst9" Sep 30 17:06:57 crc kubenswrapper[4821]: I0930 17:06:57.243974 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7dvln" Sep 30 17:06:57 crc kubenswrapper[4821]: I0930 17:06:57.345968 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dpfbb"] Sep 30 17:06:57 crc kubenswrapper[4821]: I0930 17:06:57.346241 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dpfbb" podUID="63e05203-88cc-4a5f-8b18-eb990b5a6ca0" containerName="registry-server" containerID="cri-o://8a586e974a21af66cad4fa5d4b5c08c519c8449b8c97e65b481c18d6e8166e86" gracePeriod=2 Sep 30 17:06:58 crc kubenswrapper[4821]: I0930 17:06:58.232123 4821 generic.go:334] "Generic (PLEG): container finished" podID="63e05203-88cc-4a5f-8b18-eb990b5a6ca0" containerID="8a586e974a21af66cad4fa5d4b5c08c519c8449b8c97e65b481c18d6e8166e86" exitCode=0 Sep 30 17:06:58 crc kubenswrapper[4821]: I0930 17:06:58.233505 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpfbb" 
event={"ID":"63e05203-88cc-4a5f-8b18-eb990b5a6ca0","Type":"ContainerDied","Data":"8a586e974a21af66cad4fa5d4b5c08c519c8449b8c97e65b481c18d6e8166e86"} Sep 30 17:06:58 crc kubenswrapper[4821]: I0930 17:06:58.347811 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dpfbb" Sep 30 17:06:58 crc kubenswrapper[4821]: I0930 17:06:58.421293 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63e05203-88cc-4a5f-8b18-eb990b5a6ca0-catalog-content\") pod \"63e05203-88cc-4a5f-8b18-eb990b5a6ca0\" (UID: \"63e05203-88cc-4a5f-8b18-eb990b5a6ca0\") " Sep 30 17:06:58 crc kubenswrapper[4821]: I0930 17:06:58.421377 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63e05203-88cc-4a5f-8b18-eb990b5a6ca0-utilities\") pod \"63e05203-88cc-4a5f-8b18-eb990b5a6ca0\" (UID: \"63e05203-88cc-4a5f-8b18-eb990b5a6ca0\") " Sep 30 17:06:58 crc kubenswrapper[4821]: I0930 17:06:58.421422 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fnsv\" (UniqueName: \"kubernetes.io/projected/63e05203-88cc-4a5f-8b18-eb990b5a6ca0-kube-api-access-9fnsv\") pod \"63e05203-88cc-4a5f-8b18-eb990b5a6ca0\" (UID: \"63e05203-88cc-4a5f-8b18-eb990b5a6ca0\") " Sep 30 17:06:58 crc kubenswrapper[4821]: I0930 17:06:58.422216 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63e05203-88cc-4a5f-8b18-eb990b5a6ca0-utilities" (OuterVolumeSpecName: "utilities") pod "63e05203-88cc-4a5f-8b18-eb990b5a6ca0" (UID: "63e05203-88cc-4a5f-8b18-eb990b5a6ca0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:06:58 crc kubenswrapper[4821]: I0930 17:06:58.430287 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63e05203-88cc-4a5f-8b18-eb990b5a6ca0-kube-api-access-9fnsv" (OuterVolumeSpecName: "kube-api-access-9fnsv") pod "63e05203-88cc-4a5f-8b18-eb990b5a6ca0" (UID: "63e05203-88cc-4a5f-8b18-eb990b5a6ca0"). InnerVolumeSpecName "kube-api-access-9fnsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:06:58 crc kubenswrapper[4821]: I0930 17:06:58.468716 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63e05203-88cc-4a5f-8b18-eb990b5a6ca0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63e05203-88cc-4a5f-8b18-eb990b5a6ca0" (UID: "63e05203-88cc-4a5f-8b18-eb990b5a6ca0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:06:58 crc kubenswrapper[4821]: I0930 17:06:58.522779 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63e05203-88cc-4a5f-8b18-eb990b5a6ca0-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:06:58 crc kubenswrapper[4821]: I0930 17:06:58.522809 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63e05203-88cc-4a5f-8b18-eb990b5a6ca0-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:06:58 crc kubenswrapper[4821]: I0930 17:06:58.522819 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fnsv\" (UniqueName: \"kubernetes.io/projected/63e05203-88cc-4a5f-8b18-eb990b5a6ca0-kube-api-access-9fnsv\") on node \"crc\" DevicePath \"\"" Sep 30 17:06:58 crc kubenswrapper[4821]: I0930 17:06:58.744812 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ld6kq"] Sep 30 17:06:58 crc kubenswrapper[4821]: I0930 17:06:58.745058 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ld6kq" podUID="874864e1-57fd-4885-9714-cf9e3365b5c0" containerName="registry-server" containerID="cri-o://f07c2374abeaefe1af2bf5af5abd30f83032397348fd0d711cf3ce18bfb48bcf" gracePeriod=2 Sep 30 17:06:59 crc kubenswrapper[4821]: I0930 17:06:59.239714 4821 generic.go:334] "Generic (PLEG): container finished" podID="874864e1-57fd-4885-9714-cf9e3365b5c0" containerID="f07c2374abeaefe1af2bf5af5abd30f83032397348fd0d711cf3ce18bfb48bcf" exitCode=0 Sep 30 17:06:59 crc kubenswrapper[4821]: I0930 17:06:59.239780 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ld6kq" event={"ID":"874864e1-57fd-4885-9714-cf9e3365b5c0","Type":"ContainerDied","Data":"f07c2374abeaefe1af2bf5af5abd30f83032397348fd0d711cf3ce18bfb48bcf"} Sep 30 17:06:59 crc kubenswrapper[4821]: I0930 17:06:59.242536 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpfbb" event={"ID":"63e05203-88cc-4a5f-8b18-eb990b5a6ca0","Type":"ContainerDied","Data":"d955432d1b32ced576740354ad3efb2869f542a45c1b43cab9375099cdd68eca"} Sep 30 17:06:59 crc kubenswrapper[4821]: I0930 17:06:59.242592 4821 scope.go:117] "RemoveContainer" containerID="8a586e974a21af66cad4fa5d4b5c08c519c8449b8c97e65b481c18d6e8166e86" Sep 30 17:06:59 crc kubenswrapper[4821]: I0930 17:06:59.242608 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dpfbb" Sep 30 17:06:59 crc kubenswrapper[4821]: I0930 17:06:59.256668 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dpfbb"] Sep 30 17:06:59 crc kubenswrapper[4821]: I0930 17:06:59.256797 4821 scope.go:117] "RemoveContainer" containerID="6e604cc38e886f81c578f15fb5fe3a920538f2162ae12bd69e5667a2d5bd1163" Sep 30 17:06:59 crc kubenswrapper[4821]: I0930 17:06:59.260526 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dpfbb"] Sep 30 17:06:59 crc kubenswrapper[4821]: I0930 17:06:59.268224 4821 scope.go:117] "RemoveContainer" containerID="93e5b9a2bf59d5fae2b209f03d2c6d3b4e25edbf8207265bcdae68b1a74a87fe" Sep 30 17:06:59 crc kubenswrapper[4821]: I0930 17:06:59.743295 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ld6kq" Sep 30 17:06:59 crc kubenswrapper[4821]: I0930 17:06:59.750424 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6rst9"] Sep 30 17:06:59 crc kubenswrapper[4821]: I0930 17:06:59.837512 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/874864e1-57fd-4885-9714-cf9e3365b5c0-utilities\") pod \"874864e1-57fd-4885-9714-cf9e3365b5c0\" (UID: \"874864e1-57fd-4885-9714-cf9e3365b5c0\") " Sep 30 17:06:59 crc kubenswrapper[4821]: I0930 17:06:59.837581 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/874864e1-57fd-4885-9714-cf9e3365b5c0-catalog-content\") pod \"874864e1-57fd-4885-9714-cf9e3365b5c0\" (UID: \"874864e1-57fd-4885-9714-cf9e3365b5c0\") " Sep 30 17:06:59 crc kubenswrapper[4821]: I0930 17:06:59.837643 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dzms\" (UniqueName: \"kubernetes.io/projected/874864e1-57fd-4885-9714-cf9e3365b5c0-kube-api-access-9dzms\") pod \"874864e1-57fd-4885-9714-cf9e3365b5c0\" (UID: \"874864e1-57fd-4885-9714-cf9e3365b5c0\") " Sep 30 17:06:59 crc kubenswrapper[4821]: I0930 17:06:59.838854 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/874864e1-57fd-4885-9714-cf9e3365b5c0-utilities" (OuterVolumeSpecName: "utilities") pod "874864e1-57fd-4885-9714-cf9e3365b5c0" (UID: "874864e1-57fd-4885-9714-cf9e3365b5c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:06:59 crc kubenswrapper[4821]: I0930 17:06:59.842638 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/874864e1-57fd-4885-9714-cf9e3365b5c0-kube-api-access-9dzms" (OuterVolumeSpecName: "kube-api-access-9dzms") pod "874864e1-57fd-4885-9714-cf9e3365b5c0" (UID: "874864e1-57fd-4885-9714-cf9e3365b5c0"). InnerVolumeSpecName "kube-api-access-9dzms". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:06:59 crc kubenswrapper[4821]: I0930 17:06:59.896600 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/874864e1-57fd-4885-9714-cf9e3365b5c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "874864e1-57fd-4885-9714-cf9e3365b5c0" (UID: "874864e1-57fd-4885-9714-cf9e3365b5c0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:06:59 crc kubenswrapper[4821]: I0930 17:06:59.939048 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/874864e1-57fd-4885-9714-cf9e3365b5c0-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:06:59 crc kubenswrapper[4821]: I0930 17:06:59.939104 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/874864e1-57fd-4885-9714-cf9e3365b5c0-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:06:59 crc kubenswrapper[4821]: I0930 17:06:59.939117 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dzms\" (UniqueName: \"kubernetes.io/projected/874864e1-57fd-4885-9714-cf9e3365b5c0-kube-api-access-9dzms\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:00 crc kubenswrapper[4821]: I0930 17:07:00.249510 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6rst9" podUID="d457d791-8e73-44b2-9292-33188e42571f" containerName="registry-server" containerID="cri-o://1cf9ae1ba4be21cc3638c60fd90d9db3be1abb70761febcb000b51293c2e182b" gracePeriod=2 Sep 30 17:07:00 crc kubenswrapper[4821]: I0930 17:07:00.250338 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ld6kq" Sep 30 17:07:00 crc kubenswrapper[4821]: I0930 17:07:00.255261 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ld6kq" event={"ID":"874864e1-57fd-4885-9714-cf9e3365b5c0","Type":"ContainerDied","Data":"3fa56ec4f47839ab5a85f2da6277ac6678175039a555dc081dbb52b2bcd129e0"} Sep 30 17:07:00 crc kubenswrapper[4821]: I0930 17:07:00.255315 4821 scope.go:117] "RemoveContainer" containerID="f07c2374abeaefe1af2bf5af5abd30f83032397348fd0d711cf3ce18bfb48bcf" Sep 30 17:07:00 crc kubenswrapper[4821]: I0930 17:07:00.272981 4821 scope.go:117] "RemoveContainer" containerID="426593123a473bcd42a49e427e0860d80065b84f0e689c82ed05de8c23842442" Sep 30 17:07:00 crc kubenswrapper[4821]: I0930 17:07:00.287317 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ld6kq"] Sep 30 17:07:00 crc kubenswrapper[4821]: I0930 17:07:00.289228 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ld6kq"] Sep 30 17:07:00 crc kubenswrapper[4821]: I0930 17:07:00.293639 4821 scope.go:117] "RemoveContainer" containerID="c98baffea75e9318c46095441b72f6888301c77c15ba279c130329293b814ce1" Sep 30 17:07:00 crc kubenswrapper[4821]: I0930 17:07:00.639710 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6rst9" Sep 30 17:07:00 crc kubenswrapper[4821]: I0930 17:07:00.713961 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63e05203-88cc-4a5f-8b18-eb990b5a6ca0" path="/var/lib/kubelet/pods/63e05203-88cc-4a5f-8b18-eb990b5a6ca0/volumes" Sep 30 17:07:00 crc kubenswrapper[4821]: I0930 17:07:00.714692 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="874864e1-57fd-4885-9714-cf9e3365b5c0" path="/var/lib/kubelet/pods/874864e1-57fd-4885-9714-cf9e3365b5c0/volumes" Sep 30 17:07:00 crc kubenswrapper[4821]: I0930 17:07:00.750361 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skshp\" (UniqueName: \"kubernetes.io/projected/d457d791-8e73-44b2-9292-33188e42571f-kube-api-access-skshp\") pod \"d457d791-8e73-44b2-9292-33188e42571f\" (UID: \"d457d791-8e73-44b2-9292-33188e42571f\") " Sep 30 17:07:00 crc kubenswrapper[4821]: I0930 17:07:00.750447 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d457d791-8e73-44b2-9292-33188e42571f-catalog-content\") pod \"d457d791-8e73-44b2-9292-33188e42571f\" (UID: \"d457d791-8e73-44b2-9292-33188e42571f\") " Sep 30 17:07:00 crc kubenswrapper[4821]: I0930 17:07:00.750534 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d457d791-8e73-44b2-9292-33188e42571f-utilities\") pod \"d457d791-8e73-44b2-9292-33188e42571f\" (UID: \"d457d791-8e73-44b2-9292-33188e42571f\") " Sep 30 17:07:00 crc kubenswrapper[4821]: I0930 17:07:00.752011 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d457d791-8e73-44b2-9292-33188e42571f-utilities" (OuterVolumeSpecName: "utilities") pod "d457d791-8e73-44b2-9292-33188e42571f" (UID: "d457d791-8e73-44b2-9292-33188e42571f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:07:00 crc kubenswrapper[4821]: I0930 17:07:00.765128 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d457d791-8e73-44b2-9292-33188e42571f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d457d791-8e73-44b2-9292-33188e42571f" (UID: "d457d791-8e73-44b2-9292-33188e42571f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:07:00 crc kubenswrapper[4821]: I0930 17:07:00.769427 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d457d791-8e73-44b2-9292-33188e42571f-kube-api-access-skshp" (OuterVolumeSpecName: "kube-api-access-skshp") pod "d457d791-8e73-44b2-9292-33188e42571f" (UID: "d457d791-8e73-44b2-9292-33188e42571f"). InnerVolumeSpecName "kube-api-access-skshp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:07:00 crc kubenswrapper[4821]: I0930 17:07:00.851860 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skshp\" (UniqueName: \"kubernetes.io/projected/d457d791-8e73-44b2-9292-33188e42571f-kube-api-access-skshp\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:00 crc kubenswrapper[4821]: I0930 17:07:00.851895 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d457d791-8e73-44b2-9292-33188e42571f-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:00 crc kubenswrapper[4821]: I0930 17:07:00.851905 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d457d791-8e73-44b2-9292-33188e42571f-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:01 crc kubenswrapper[4821]: I0930 17:07:01.143979 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7dvln"] Sep 30 17:07:01 crc kubenswrapper[4821]: I0930 17:07:01.144230 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7dvln" podUID="4a04dbc2-4ac0-4926-96a5-aa5d11cf3380" containerName="registry-server" containerID="cri-o://8d6612110b925076e9364ae4606982c25211990b8103ddd6cf08fb23ebb9aeed" gracePeriod=2 Sep 30 17:07:01 crc kubenswrapper[4821]: I0930 17:07:01.268052 4821 generic.go:334] "Generic (PLEG): container finished" podID="d457d791-8e73-44b2-9292-33188e42571f" containerID="1cf9ae1ba4be21cc3638c60fd90d9db3be1abb70761febcb000b51293c2e182b" exitCode=0 Sep 30 17:07:01 crc kubenswrapper[4821]: I0930 17:07:01.268259 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rst9" event={"ID":"d457d791-8e73-44b2-9292-33188e42571f","Type":"ContainerDied","Data":"1cf9ae1ba4be21cc3638c60fd90d9db3be1abb70761febcb000b51293c2e182b"} Sep 30 17:07:01 crc kubenswrapper[4821]: I0930 17:07:01.268577 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rst9" event={"ID":"d457d791-8e73-44b2-9292-33188e42571f","Type":"ContainerDied","Data":"8770e9e9cf8a7713d879e45b973ddff0fe4ec13b86adffd31aecfe3ea8c5c00d"} Sep 30 17:07:01 crc kubenswrapper[4821]: I0930 17:07:01.268719 4821 scope.go:117] "RemoveContainer" containerID="1cf9ae1ba4be21cc3638c60fd90d9db3be1abb70761febcb000b51293c2e182b" Sep 30 17:07:01 crc kubenswrapper[4821]: I0930 17:07:01.268334 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6rst9" Sep 30 17:07:01 crc kubenswrapper[4821]: I0930 17:07:01.282715 4821 scope.go:117] "RemoveContainer" containerID="de2b5c845c4b725cc8b6fca9069d2d12e5c99780fde817a9fef21771750631bc" Sep 30 17:07:01 crc kubenswrapper[4821]: I0930 17:07:01.303982 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6rst9"] Sep 30 17:07:01 crc kubenswrapper[4821]: I0930 17:07:01.308150 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6rst9"] Sep 30 17:07:01 crc kubenswrapper[4821]: I0930 17:07:01.314995 4821 scope.go:117] "RemoveContainer" containerID="cd0322eee7c43676804cd5889bbcac7266bd6bc06c03e68626946952b32cae83" Sep 30 17:07:01 crc kubenswrapper[4821]: I0930 17:07:01.328868 4821 scope.go:117] "RemoveContainer" containerID="1cf9ae1ba4be21cc3638c60fd90d9db3be1abb70761febcb000b51293c2e182b" Sep 30 17:07:01 crc kubenswrapper[4821]: E0930 17:07:01.330551 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cf9ae1ba4be21cc3638c60fd90d9db3be1abb70761febcb000b51293c2e182b\": container with ID starting with 1cf9ae1ba4be21cc3638c60fd90d9db3be1abb70761febcb000b51293c2e182b not found: ID does not exist" containerID="1cf9ae1ba4be21cc3638c60fd90d9db3be1abb70761febcb000b51293c2e182b" Sep 30 17:07:01 crc kubenswrapper[4821]: I0930 17:07:01.330727 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cf9ae1ba4be21cc3638c60fd90d9db3be1abb70761febcb000b51293c2e182b"} err="failed to get container status \"1cf9ae1ba4be21cc3638c60fd90d9db3be1abb70761febcb000b51293c2e182b\": rpc error: code = NotFound desc = could not find container \"1cf9ae1ba4be21cc3638c60fd90d9db3be1abb70761febcb000b51293c2e182b\": container with ID starting with 1cf9ae1ba4be21cc3638c60fd90d9db3be1abb70761febcb000b51293c2e182b not found: ID does not exist" Sep 30 17:07:01 crc kubenswrapper[4821]: I0930 17:07:01.330849 4821 scope.go:117] "RemoveContainer" containerID="de2b5c845c4b725cc8b6fca9069d2d12e5c99780fde817a9fef21771750631bc" Sep 30 17:07:01 crc kubenswrapper[4821]: E0930 17:07:01.333468 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de2b5c845c4b725cc8b6fca9069d2d12e5c99780fde817a9fef21771750631bc\": container with ID starting with de2b5c845c4b725cc8b6fca9069d2d12e5c99780fde817a9fef21771750631bc not found: ID does not exist" containerID="de2b5c845c4b725cc8b6fca9069d2d12e5c99780fde817a9fef21771750631bc" Sep 30 17:07:01 crc kubenswrapper[4821]: I0930 17:07:01.333506 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de2b5c845c4b725cc8b6fca9069d2d12e5c99780fde817a9fef21771750631bc"} err="failed to get container status \"de2b5c845c4b725cc8b6fca9069d2d12e5c99780fde817a9fef21771750631bc\": rpc error: code = NotFound desc = could not find container \"de2b5c845c4b725cc8b6fca9069d2d12e5c99780fde817a9fef21771750631bc\": container with ID starting with de2b5c845c4b725cc8b6fca9069d2d12e5c99780fde817a9fef21771750631bc not found: ID does not exist" Sep 30 17:07:01 crc kubenswrapper[4821]: I0930 17:07:01.333534 4821 scope.go:117] "RemoveContainer" containerID="cd0322eee7c43676804cd5889bbcac7266bd6bc06c03e68626946952b32cae83" Sep 30 17:07:01 crc kubenswrapper[4821]: E0930 17:07:01.334140 4821 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"cd0322eee7c43676804cd5889bbcac7266bd6bc06c03e68626946952b32cae83\": container with ID starting with cd0322eee7c43676804cd5889bbcac7266bd6bc06c03e68626946952b32cae83 not found: ID does not exist" containerID="cd0322eee7c43676804cd5889bbcac7266bd6bc06c03e68626946952b32cae83" Sep 30 17:07:01 crc kubenswrapper[4821]: I0930 17:07:01.334194 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd0322eee7c43676804cd5889bbcac7266bd6bc06c03e68626946952b32cae83"} err="failed to get container status \"cd0322eee7c43676804cd5889bbcac7266bd6bc06c03e68626946952b32cae83\": rpc error: code = NotFound desc = could not find container \"cd0322eee7c43676804cd5889bbcac7266bd6bc06c03e68626946952b32cae83\": container with ID starting with cd0322eee7c43676804cd5889bbcac7266bd6bc06c03e68626946952b32cae83 not found: ID does not exist" Sep 30 17:07:01 crc kubenswrapper[4821]: I0930 17:07:01.496997 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7dvln" Sep 30 17:07:01 crc kubenswrapper[4821]: I0930 17:07:01.565468 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a04dbc2-4ac0-4926-96a5-aa5d11cf3380-utilities\") pod \"4a04dbc2-4ac0-4926-96a5-aa5d11cf3380\" (UID: \"4a04dbc2-4ac0-4926-96a5-aa5d11cf3380\") " Sep 30 17:07:01 crc kubenswrapper[4821]: I0930 17:07:01.565517 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9lb4\" (UniqueName: \"kubernetes.io/projected/4a04dbc2-4ac0-4926-96a5-aa5d11cf3380-kube-api-access-r9lb4\") pod \"4a04dbc2-4ac0-4926-96a5-aa5d11cf3380\" (UID: \"4a04dbc2-4ac0-4926-96a5-aa5d11cf3380\") " Sep 30 17:07:01 crc kubenswrapper[4821]: I0930 17:07:01.565556 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a04dbc2-4ac0-4926-96a5-aa5d11cf3380-catalog-content\") pod \"4a04dbc2-4ac0-4926-96a5-aa5d11cf3380\" (UID: \"4a04dbc2-4ac0-4926-96a5-aa5d11cf3380\") " Sep 30 17:07:01 crc kubenswrapper[4821]: I0930 17:07:01.566387 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a04dbc2-4ac0-4926-96a5-aa5d11cf3380-utilities" (OuterVolumeSpecName: "utilities") pod "4a04dbc2-4ac0-4926-96a5-aa5d11cf3380" (UID: "4a04dbc2-4ac0-4926-96a5-aa5d11cf3380"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:07:01 crc kubenswrapper[4821]: I0930 17:07:01.568891 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a04dbc2-4ac0-4926-96a5-aa5d11cf3380-kube-api-access-r9lb4" (OuterVolumeSpecName: "kube-api-access-r9lb4") pod "4a04dbc2-4ac0-4926-96a5-aa5d11cf3380" (UID: "4a04dbc2-4ac0-4926-96a5-aa5d11cf3380"). InnerVolumeSpecName "kube-api-access-r9lb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:07:01 crc kubenswrapper[4821]: I0930 17:07:01.647939 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a04dbc2-4ac0-4926-96a5-aa5d11cf3380-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4a04dbc2-4ac0-4926-96a5-aa5d11cf3380" (UID: "4a04dbc2-4ac0-4926-96a5-aa5d11cf3380"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:07:01 crc kubenswrapper[4821]: I0930 17:07:01.666863 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a04dbc2-4ac0-4926-96a5-aa5d11cf3380-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:01 crc kubenswrapper[4821]: I0930 17:07:01.666886 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9lb4\" (UniqueName: \"kubernetes.io/projected/4a04dbc2-4ac0-4926-96a5-aa5d11cf3380-kube-api-access-r9lb4\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:01 crc kubenswrapper[4821]: I0930 17:07:01.666898 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a04dbc2-4ac0-4926-96a5-aa5d11cf3380-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:02 crc kubenswrapper[4821]: I0930 17:07:02.278672 4821 generic.go:334] "Generic (PLEG): container finished" podID="4a04dbc2-4ac0-4926-96a5-aa5d11cf3380" containerID="8d6612110b925076e9364ae4606982c25211990b8103ddd6cf08fb23ebb9aeed" exitCode=0 Sep 30 17:07:02 crc kubenswrapper[4821]: I0930 17:07:02.278720 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dvln" event={"ID":"4a04dbc2-4ac0-4926-96a5-aa5d11cf3380","Type":"ContainerDied","Data":"8d6612110b925076e9364ae4606982c25211990b8103ddd6cf08fb23ebb9aeed"} Sep 30 17:07:02 crc kubenswrapper[4821]: I0930 17:07:02.278745 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dvln" event={"ID":"4a04dbc2-4ac0-4926-96a5-aa5d11cf3380","Type":"ContainerDied","Data":"f56dc158676248c646e2507420976631cb75ad6ed8bd8018698cfb073229da8d"} Sep 30 17:07:02 crc kubenswrapper[4821]: I0930 17:07:02.278765 4821 scope.go:117] "RemoveContainer" containerID="8d6612110b925076e9364ae4606982c25211990b8103ddd6cf08fb23ebb9aeed" Sep 30 17:07:02 crc kubenswrapper[4821]: I0930 17:07:02.278895 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7dvln" Sep 30 17:07:02 crc kubenswrapper[4821]: I0930 17:07:02.298205 4821 scope.go:117] "RemoveContainer" containerID="40aa81e93da67049e604f2b335c5eddc84bfb6a2f9cb130e007b761efc6f53d9" Sep 30 17:07:02 crc kubenswrapper[4821]: I0930 17:07:02.312634 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7dvln"] Sep 30 17:07:02 crc kubenswrapper[4821]: I0930 17:07:02.317122 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7dvln"] Sep 30 17:07:02 crc kubenswrapper[4821]: I0930 17:07:02.324421 4821 scope.go:117] "RemoveContainer" containerID="a13b7287177a109fd86c2794ddad087f4d2b2bbfad59fe93f826aa9412fd3d93" Sep 30 17:07:02 crc kubenswrapper[4821]: I0930 17:07:02.337826 4821 scope.go:117] "RemoveContainer" containerID="8d6612110b925076e9364ae4606982c25211990b8103ddd6cf08fb23ebb9aeed" Sep 30 17:07:02 crc kubenswrapper[4821]: E0930 17:07:02.338251 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d6612110b925076e9364ae4606982c25211990b8103ddd6cf08fb23ebb9aeed\": container with ID starting with 8d6612110b925076e9364ae4606982c25211990b8103ddd6cf08fb23ebb9aeed not found: ID does not exist" containerID="8d6612110b925076e9364ae4606982c25211990b8103ddd6cf08fb23ebb9aeed" Sep 30 17:07:02 crc kubenswrapper[4821]: I0930 17:07:02.338284 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d6612110b925076e9364ae4606982c25211990b8103ddd6cf08fb23ebb9aeed"} err="failed to get container status \"8d6612110b925076e9364ae4606982c25211990b8103ddd6cf08fb23ebb9aeed\": rpc error: code = NotFound desc = could not find container \"8d6612110b925076e9364ae4606982c25211990b8103ddd6cf08fb23ebb9aeed\": container with ID starting with 8d6612110b925076e9364ae4606982c25211990b8103ddd6cf08fb23ebb9aeed not found: ID does not exist" Sep 30 17:07:02 crc kubenswrapper[4821]: I0930 17:07:02.338310 4821 scope.go:117] "RemoveContainer" containerID="40aa81e93da67049e604f2b335c5eddc84bfb6a2f9cb130e007b761efc6f53d9" Sep 30 17:07:02 crc kubenswrapper[4821]: E0930 17:07:02.338700 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40aa81e93da67049e604f2b335c5eddc84bfb6a2f9cb130e007b761efc6f53d9\": container with ID starting with 40aa81e93da67049e604f2b335c5eddc84bfb6a2f9cb130e007b761efc6f53d9 not found: ID does not exist" containerID="40aa81e93da67049e604f2b335c5eddc84bfb6a2f9cb130e007b761efc6f53d9" Sep 30 17:07:02 crc kubenswrapper[4821]: I0930 17:07:02.338728 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40aa81e93da67049e604f2b335c5eddc84bfb6a2f9cb130e007b761efc6f53d9"} err="failed to get container status \"40aa81e93da67049e604f2b335c5eddc84bfb6a2f9cb130e007b761efc6f53d9\": rpc error: code = NotFound desc = could not find container \"40aa81e93da67049e604f2b335c5eddc84bfb6a2f9cb130e007b761efc6f53d9\": container with ID starting with 40aa81e93da67049e604f2b335c5eddc84bfb6a2f9cb130e007b761efc6f53d9 not found: ID does not exist" Sep 30 17:07:02 crc kubenswrapper[4821]: I0930 17:07:02.338745 4821 scope.go:117] "RemoveContainer" containerID="a13b7287177a109fd86c2794ddad087f4d2b2bbfad59fe93f826aa9412fd3d93" Sep 30 17:07:02 crc kubenswrapper[4821]: E0930 17:07:02.338962 4821 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"a13b7287177a109fd86c2794ddad087f4d2b2bbfad59fe93f826aa9412fd3d93\": container with ID starting with a13b7287177a109fd86c2794ddad087f4d2b2bbfad59fe93f826aa9412fd3d93 not found: ID does not exist" containerID="a13b7287177a109fd86c2794ddad087f4d2b2bbfad59fe93f826aa9412fd3d93" Sep 30 17:07:02 crc kubenswrapper[4821]: I0930 17:07:02.338985 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a13b7287177a109fd86c2794ddad087f4d2b2bbfad59fe93f826aa9412fd3d93"} err="failed to get container status \"a13b7287177a109fd86c2794ddad087f4d2b2bbfad59fe93f826aa9412fd3d93\": rpc error: code = NotFound desc = could not find container \"a13b7287177a109fd86c2794ddad087f4d2b2bbfad59fe93f826aa9412fd3d93\": container with ID starting with a13b7287177a109fd86c2794ddad087f4d2b2bbfad59fe93f826aa9412fd3d93 not found: ID does not exist" Sep 30 17:07:02 crc kubenswrapper[4821]: I0930 17:07:02.713405 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a04dbc2-4ac0-4926-96a5-aa5d11cf3380" path="/var/lib/kubelet/pods/4a04dbc2-4ac0-4926-96a5-aa5d11cf3380/volumes" Sep 30 17:07:02 crc kubenswrapper[4821]: I0930 17:07:02.714024 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d457d791-8e73-44b2-9292-33188e42571f" path="/var/lib/kubelet/pods/d457d791-8e73-44b2-9292-33188e42571f/volumes" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.212920 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" podUID="d2c09848-ef88-4c0a-8ae5-fd0e9885956c" containerName="oauth-openshift" containerID="cri-o://569a8812f160d01aa61146bdfe3779794d205a76b5153a5d16ee6cbca9a7b6f8" gracePeriod=15 Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.366641 4821 generic.go:334] "Generic (PLEG): container finished" podID="d2c09848-ef88-4c0a-8ae5-fd0e9885956c" containerID="569a8812f160d01aa61146bdfe3779794d205a76b5153a5d16ee6cbca9a7b6f8" exitCode=0 Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.366685 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" event={"ID":"d2c09848-ef88-4c0a-8ae5-fd0e9885956c","Type":"ContainerDied","Data":"569a8812f160d01aa61146bdfe3779794d205a76b5153a5d16ee6cbca9a7b6f8"} Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.600127 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.633446 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-666545c866-wbkqv"] Sep 30 17:07:18 crc kubenswrapper[4821]: E0930 17:07:18.633701 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="874864e1-57fd-4885-9714-cf9e3365b5c0" containerName="extract-content" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.633715 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="874864e1-57fd-4885-9714-cf9e3365b5c0" containerName="extract-content" Sep 30 17:07:18 crc kubenswrapper[4821]: E0930 17:07:18.633731 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a04dbc2-4ac0-4926-96a5-aa5d11cf3380" containerName="extract-content" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.633739 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a04dbc2-4ac0-4926-96a5-aa5d11cf3380" containerName="extract-content" Sep 30 17:07:18 crc kubenswrapper[4821]: E0930 17:07:18.633752 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a04dbc2-4ac0-4926-96a5-aa5d11cf3380" containerName="registry-server" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.633759 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a04dbc2-4ac0-4926-96a5-aa5d11cf3380" containerName="registry-server" Sep 30 17:07:18 crc kubenswrapper[4821]: E0930 17:07:18.633766 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35656fe4-aab7-4d1e-b515-b33c8eb7ae8b" containerName="pruner" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.633773 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="35656fe4-aab7-4d1e-b515-b33c8eb7ae8b" containerName="pruner" Sep 30 17:07:18 crc kubenswrapper[4821]: E0930 17:07:18.633784 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d457d791-8e73-44b2-9292-33188e42571f" containerName="extract-content" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.633791 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="d457d791-8e73-44b2-9292-33188e42571f" containerName="extract-content" Sep 30 17:07:18 crc kubenswrapper[4821]: E0930 17:07:18.633802 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a04dbc2-4ac0-4926-96a5-aa5d11cf3380" containerName="extract-utilities" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.633810 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a04dbc2-4ac0-4926-96a5-aa5d11cf3380" containerName="extract-utilities" Sep 30 17:07:18 crc kubenswrapper[4821]: E0930 17:07:18.633821 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2c09848-ef88-4c0a-8ae5-fd0e9885956c" containerName="oauth-openshift" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.633828 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2c09848-ef88-4c0a-8ae5-fd0e9885956c" containerName="oauth-openshift" Sep 30 17:07:18 crc kubenswrapper[4821]: E0930 17:07:18.633836 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ab0936e-6bd0-4587-b7e1-76575578eec9" containerName="pruner" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.633843 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ab0936e-6bd0-4587-b7e1-76575578eec9" containerName="pruner" Sep 30 17:07:18 crc kubenswrapper[4821]: E0930 17:07:18.633853 4821 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="874864e1-57fd-4885-9714-cf9e3365b5c0" containerName="registry-server" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.633861 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="874864e1-57fd-4885-9714-cf9e3365b5c0" containerName="registry-server" Sep 30 17:07:18 crc kubenswrapper[4821]: E0930 17:07:18.633874 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d457d791-8e73-44b2-9292-33188e42571f" containerName="extract-utilities" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.633882 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="d457d791-8e73-44b2-9292-33188e42571f" containerName="extract-utilities" Sep 30 17:07:18 crc kubenswrapper[4821]: E0930 17:07:18.633893 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63e05203-88cc-4a5f-8b18-eb990b5a6ca0" containerName="extract-utilities" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.633902 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="63e05203-88cc-4a5f-8b18-eb990b5a6ca0" containerName="extract-utilities" Sep 30 17:07:18 crc kubenswrapper[4821]: E0930 17:07:18.633912 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63e05203-88cc-4a5f-8b18-eb990b5a6ca0" containerName="registry-server" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.633920 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="63e05203-88cc-4a5f-8b18-eb990b5a6ca0" containerName="registry-server" Sep 30 17:07:18 crc kubenswrapper[4821]: E0930 17:07:18.633933 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56fb6d19-7b78-4122-9989-0676a86c33dd" containerName="collect-profiles" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.633940 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="56fb6d19-7b78-4122-9989-0676a86c33dd" containerName="collect-profiles" Sep 30 17:07:18 crc kubenswrapper[4821]: E0930 17:07:18.633948 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d457d791-8e73-44b2-9292-33188e42571f" containerName="registry-server" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.633956 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="d457d791-8e73-44b2-9292-33188e42571f" containerName="registry-server" Sep 30 17:07:18 crc kubenswrapper[4821]: E0930 17:07:18.633969 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="874864e1-57fd-4885-9714-cf9e3365b5c0" containerName="extract-utilities" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.633976 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="874864e1-57fd-4885-9714-cf9e3365b5c0" containerName="extract-utilities" Sep 30 17:07:18 crc kubenswrapper[4821]: E0930 17:07:18.633987 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63e05203-88cc-4a5f-8b18-eb990b5a6ca0" containerName="extract-content" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.633994 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="63e05203-88cc-4a5f-8b18-eb990b5a6ca0" containerName="extract-content" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.634133 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2c09848-ef88-4c0a-8ae5-fd0e9885956c" containerName="oauth-openshift" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.634146 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="63e05203-88cc-4a5f-8b18-eb990b5a6ca0" containerName="registry-server" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.634157 4821 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d457d791-8e73-44b2-9292-33188e42571f" containerName="registry-server" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.634168 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="56fb6d19-7b78-4122-9989-0676a86c33dd" containerName="collect-profiles" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.634177 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="874864e1-57fd-4885-9714-cf9e3365b5c0" containerName="registry-server" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.634185 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a04dbc2-4ac0-4926-96a5-aa5d11cf3380" containerName="registry-server" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.634195 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="35656fe4-aab7-4d1e-b515-b33c8eb7ae8b" containerName="pruner" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.634205 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ab0936e-6bd0-4587-b7e1-76575578eec9" containerName="pruner" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.637788 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.649065 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-666545c866-wbkqv"] Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.694140 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-user-template-provider-selection\") pod \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.694191 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-service-ca\") pod \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.694214 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-audit-policies\") pod \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.694267 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-trusted-ca-bundle\") pod \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.694285 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-session\") pod \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.694309 4821 reconciler_common.go:159] 
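The RemoveStaleState burst above happens when the new oauth-openshift pod is admitted: before reserving resources for it, the CPU and memory managers sweep their checkpointed per-container assignments and drop every entry whose pod no longer exists on the node (all the marketplace and oauth pods just deleted). In outline, with illustrative types rather than the managers' real checkpoint structures:

    package main

    import "fmt"

    type key struct{ podUID, container string }

    // removeStaleState drops assignments for containers whose pod is no
    // longer active, in the spirit of cpu_manager.go's RemoveStaleState
    // (sketch; the real managers checkpoint CPU sets and NUMA state).
    func removeStaleState(assignments map[key]string, activePods map[string]bool) {
        for k := range assignments {
            if !activePods[k.podUID] {
                fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", k.podUID, k.container)
                delete(assignments, k)
            }
        }
    }

    func main() {
        assignments := map[key]string{
            {"d457d791", "registry-server"}: "cpuset 0-1",
            {"5bae973a", "oauth-openshift"}: "cpuset 2-3",
        }
        removeStaleState(assignments, map[string]bool{"5bae973a": true})
        fmt.Println(len(assignments), "assignment(s) left") // only the live pod remains
    }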
"operationExecutor.UnmountVolume started for volume \"kube-api-access-7bwwv\" (UniqueName: \"kubernetes.io/projected/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-kube-api-access-7bwwv\") pod \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.694324 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-ocp-branding-template\") pod \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.694346 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-user-template-login\") pod \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.694367 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-user-template-error\") pod \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.694391 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-router-certs\") pod \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.694406 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-serving-cert\") pod \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.694440 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-user-idp-0-file-data\") pod \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.694469 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-audit-dir\") pod \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.694488 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-cliconfig\") pod \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\" (UID: \"d2c09848-ef88-4c0a-8ae5-fd0e9885956c\") " Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.694659 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5bae973a-1b57-4a80-b618-7ee56894fbe7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.694681 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5bae973a-1b57-4a80-b618-7ee56894fbe7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.694700 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5bae973a-1b57-4a80-b618-7ee56894fbe7-v4-0-config-user-template-error\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.694715 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d77zm\" (UniqueName: \"kubernetes.io/projected/5bae973a-1b57-4a80-b618-7ee56894fbe7-kube-api-access-d77zm\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.694736 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5bae973a-1b57-4a80-b618-7ee56894fbe7-v4-0-config-user-template-login\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.694754 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5bae973a-1b57-4a80-b618-7ee56894fbe7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.694773 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bae973a-1b57-4a80-b618-7ee56894fbe7-audit-policies\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.694801 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5bae973a-1b57-4a80-b618-7ee56894fbe7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.694825 4821 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bae973a-1b57-4a80-b618-7ee56894fbe7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.694841 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5bae973a-1b57-4a80-b618-7ee56894fbe7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.694858 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5bae973a-1b57-4a80-b618-7ee56894fbe7-audit-dir\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.694873 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5bae973a-1b57-4a80-b618-7ee56894fbe7-v4-0-config-system-session\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.694891 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5bae973a-1b57-4a80-b618-7ee56894fbe7-v4-0-config-system-router-certs\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.694908 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5bae973a-1b57-4a80-b618-7ee56894fbe7-v4-0-config-system-service-ca\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.696262 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "d2c09848-ef88-4c0a-8ae5-fd0e9885956c" (UID: "d2c09848-ef88-4c0a-8ae5-fd0e9885956c"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.696301 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "d2c09848-ef88-4c0a-8ae5-fd0e9885956c" (UID: "d2c09848-ef88-4c0a-8ae5-fd0e9885956c"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.697039 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "d2c09848-ef88-4c0a-8ae5-fd0e9885956c" (UID: "d2c09848-ef88-4c0a-8ae5-fd0e9885956c"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.697453 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "d2c09848-ef88-4c0a-8ae5-fd0e9885956c" (UID: "d2c09848-ef88-4c0a-8ae5-fd0e9885956c"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.698001 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "d2c09848-ef88-4c0a-8ae5-fd0e9885956c" (UID: "d2c09848-ef88-4c0a-8ae5-fd0e9885956c"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.701576 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "d2c09848-ef88-4c0a-8ae5-fd0e9885956c" (UID: "d2c09848-ef88-4c0a-8ae5-fd0e9885956c"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.713300 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "d2c09848-ef88-4c0a-8ae5-fd0e9885956c" (UID: "d2c09848-ef88-4c0a-8ae5-fd0e9885956c"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.713313 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-kube-api-access-7bwwv" (OuterVolumeSpecName: "kube-api-access-7bwwv") pod "d2c09848-ef88-4c0a-8ae5-fd0e9885956c" (UID: "d2c09848-ef88-4c0a-8ae5-fd0e9885956c"). InnerVolumeSpecName "kube-api-access-7bwwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.715584 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "d2c09848-ef88-4c0a-8ae5-fd0e9885956c" (UID: "d2c09848-ef88-4c0a-8ae5-fd0e9885956c"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.715794 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "d2c09848-ef88-4c0a-8ae5-fd0e9885956c" (UID: "d2c09848-ef88-4c0a-8ae5-fd0e9885956c"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.717996 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "d2c09848-ef88-4c0a-8ae5-fd0e9885956c" (UID: "d2c09848-ef88-4c0a-8ae5-fd0e9885956c"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.719238 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "d2c09848-ef88-4c0a-8ae5-fd0e9885956c" (UID: "d2c09848-ef88-4c0a-8ae5-fd0e9885956c"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.722302 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "d2c09848-ef88-4c0a-8ae5-fd0e9885956c" (UID: "d2c09848-ef88-4c0a-8ae5-fd0e9885956c"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.723243 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "d2c09848-ef88-4c0a-8ae5-fd0e9885956c" (UID: "d2c09848-ef88-4c0a-8ae5-fd0e9885956c"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.797023 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bae973a-1b57-4a80-b618-7ee56894fbe7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.797203 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5bae973a-1b57-4a80-b618-7ee56894fbe7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.798351 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bae973a-1b57-4a80-b618-7ee56894fbe7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.798483 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5bae973a-1b57-4a80-b618-7ee56894fbe7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.798680 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5bae973a-1b57-4a80-b618-7ee56894fbe7-audit-dir\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.798725 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5bae973a-1b57-4a80-b618-7ee56894fbe7-v4-0-config-system-session\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.798766 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5bae973a-1b57-4a80-b618-7ee56894fbe7-v4-0-config-system-router-certs\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.798797 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5bae973a-1b57-4a80-b618-7ee56894fbe7-v4-0-config-system-service-ca\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 
30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.798883 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5bae973a-1b57-4a80-b618-7ee56894fbe7-audit-dir\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.799009 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5bae973a-1b57-4a80-b618-7ee56894fbe7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.799050 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5bae973a-1b57-4a80-b618-7ee56894fbe7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.799105 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5bae973a-1b57-4a80-b618-7ee56894fbe7-v4-0-config-user-template-error\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.799137 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d77zm\" (UniqueName: \"kubernetes.io/projected/5bae973a-1b57-4a80-b618-7ee56894fbe7-kube-api-access-d77zm\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.799171 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5bae973a-1b57-4a80-b618-7ee56894fbe7-v4-0-config-user-template-login\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.799220 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5bae973a-1b57-4a80-b618-7ee56894fbe7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.799267 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bae973a-1b57-4a80-b618-7ee56894fbe7-audit-policies\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.799393 4821 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5bae973a-1b57-4a80-b618-7ee56894fbe7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.800038 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.800499 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bae973a-1b57-4a80-b618-7ee56894fbe7-audit-policies\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.801492 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5bae973a-1b57-4a80-b618-7ee56894fbe7-v4-0-config-system-service-ca\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.802949 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5bae973a-1b57-4a80-b618-7ee56894fbe7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.803559 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.803581 4821 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-audit-policies\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.803593 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.803604 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.803631 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bwwv\" (UniqueName: \"kubernetes.io/projected/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-kube-api-access-7bwwv\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.803641 4821 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.803653 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.803665 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.803728 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.803756 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.803774 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.803814 4821 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-audit-dir\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.803830 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d2c09848-ef88-4c0a-8ae5-fd0e9885956c-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.804018 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5bae973a-1b57-4a80-b618-7ee56894fbe7-v4-0-config-system-session\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.804467 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5bae973a-1b57-4a80-b618-7ee56894fbe7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.804864 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5bae973a-1b57-4a80-b618-7ee56894fbe7-v4-0-config-user-template-login\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " 
pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.805254 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5bae973a-1b57-4a80-b618-7ee56894fbe7-v4-0-config-system-router-certs\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.806486 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5bae973a-1b57-4a80-b618-7ee56894fbe7-v4-0-config-user-template-error\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.806808 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5bae973a-1b57-4a80-b618-7ee56894fbe7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.806826 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5bae973a-1b57-4a80-b618-7ee56894fbe7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.815416 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d77zm\" (UniqueName: \"kubernetes.io/projected/5bae973a-1b57-4a80-b618-7ee56894fbe7-kube-api-access-d77zm\") pod \"oauth-openshift-666545c866-wbkqv\" (UID: \"5bae973a-1b57-4a80-b618-7ee56894fbe7\") " pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:18 crc kubenswrapper[4821]: I0930 17:07:18.952066 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:19 crc kubenswrapper[4821]: I0930 17:07:19.184996 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-666545c866-wbkqv"] Sep 30 17:07:19 crc kubenswrapper[4821]: I0930 17:07:19.373528 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" event={"ID":"5bae973a-1b57-4a80-b618-7ee56894fbe7","Type":"ContainerStarted","Data":"623bfe3ae3bbedf13a8c6760b37d401b107eb4332fae19d34cb3760819d5c5f6"} Sep 30 17:07:19 crc kubenswrapper[4821]: I0930 17:07:19.375063 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" event={"ID":"d2c09848-ef88-4c0a-8ae5-fd0e9885956c","Type":"ContainerDied","Data":"de5038cf20cfa96d8dddb608fc58d5adc84746130f197d0f5ac0d3f3f4ce3525"} Sep 30 17:07:19 crc kubenswrapper[4821]: I0930 17:07:19.375114 4821 scope.go:117] "RemoveContainer" containerID="569a8812f160d01aa61146bdfe3779794d205a76b5153a5d16ee6cbca9a7b6f8" Sep 30 17:07:19 crc kubenswrapper[4821]: I0930 17:07:19.375240 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qrpnr" Sep 30 17:07:19 crc kubenswrapper[4821]: I0930 17:07:19.422197 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qrpnr"] Sep 30 17:07:19 crc kubenswrapper[4821]: I0930 17:07:19.424123 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qrpnr"] Sep 30 17:07:20 crc kubenswrapper[4821]: I0930 17:07:20.384623 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" event={"ID":"5bae973a-1b57-4a80-b618-7ee56894fbe7","Type":"ContainerStarted","Data":"de01aecb3f42808bbc8f21caf47b5e0781b2769fd5c0ddf78886b3a4b10374a0"} Sep 30 17:07:20 crc kubenswrapper[4821]: I0930 17:07:20.384904 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:20 crc kubenswrapper[4821]: I0930 17:07:20.392018 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" Sep 30 17:07:20 crc kubenswrapper[4821]: I0930 17:07:20.407505 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-666545c866-wbkqv" podStartSLOduration=27.407482979 podStartE2EDuration="27.407482979s" podCreationTimestamp="2025-09-30 17:06:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:07:20.406299229 +0000 UTC m=+236.311345173" watchObservedRunningTime="2025-09-30 17:07:20.407482979 +0000 UTC m=+236.312528923" Sep 30 17:07:20 crc kubenswrapper[4821]: I0930 17:07:20.713819 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2c09848-ef88-4c0a-8ae5-fd0e9885956c" path="/var/lib/kubelet/pods/d2c09848-ef88-4c0a-8ae5-fd0e9885956c/volumes" Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.328198 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4bqx8"] Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.328999 4821 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-marketplace/certified-operators-4bqx8" podUID="5aa5939d-5ace-49e7-a2ba-b028cf241b02" containerName="registry-server" containerID="cri-o://16da39e29f9bcdc4a1e85309b9f2f83d4be7aaf7a1814e7ca41cb236ac346b6e" gracePeriod=30 Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.345241 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jn2vp"] Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.345670 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jn2vp" podUID="7700fbde-8552-4aa1-b6e9-910bf3a45207" containerName="registry-server" containerID="cri-o://d85d2ec0cda6163d6b9fec85bf88a11951bea26ebdb415c165cfb3c0fe47ce6d" gracePeriod=30 Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.369933 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-854tr"] Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.370370 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-854tr" podUID="f84add95-1bc2-4534-93aa-bba177335e74" containerName="marketplace-operator" containerID="cri-o://3c6dab9bbdb91a2efba542dc2f9da844a3e1c5eea152dbe8ccaec960c33d2244" gracePeriod=30 Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.382438 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fvjkc"] Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.382971 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fvjkc" podUID="c07e1e4d-c8fa-48d6-a138-3c42ccf2e368" containerName="registry-server" containerID="cri-o://eec0b1746d738d7c3d6385ceff8769a119b4dfb0b9462e3682ca18a4b5db1705" gracePeriod=30 Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.392236 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dn2dm"] Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.392640 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dn2dm" podUID="7d7a822d-04df-4ce0-bc18-15bc2195f18e" containerName="registry-server" containerID="cri-o://6c50c125cd07e97a671160725289be2a6cb600fe7940285779715c67acfd21e7" gracePeriod=30 Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.397833 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g6ghn"] Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.398667 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-g6ghn" Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.416200 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g6ghn"] Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.451750 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8b3d34e8-81c3-4214-a3d9-a3d787b69b9a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g6ghn\" (UID: \"8b3d34e8-81c3-4214-a3d9-a3d787b69b9a\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6ghn" Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.451809 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8b3d34e8-81c3-4214-a3d9-a3d787b69b9a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g6ghn\" (UID: \"8b3d34e8-81c3-4214-a3d9-a3d787b69b9a\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6ghn" Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.451836 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt2p2\" (UniqueName: \"kubernetes.io/projected/8b3d34e8-81c3-4214-a3d9-a3d787b69b9a-kube-api-access-tt2p2\") pod \"marketplace-operator-79b997595-g6ghn\" (UID: \"8b3d34e8-81c3-4214-a3d9-a3d787b69b9a\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6ghn" Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.463588 4821 generic.go:334] "Generic (PLEG): container finished" podID="5aa5939d-5ace-49e7-a2ba-b028cf241b02" containerID="16da39e29f9bcdc4a1e85309b9f2f83d4be7aaf7a1814e7ca41cb236ac346b6e" exitCode=0 Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.463642 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bqx8" event={"ID":"5aa5939d-5ace-49e7-a2ba-b028cf241b02","Type":"ContainerDied","Data":"16da39e29f9bcdc4a1e85309b9f2f83d4be7aaf7a1814e7ca41cb236ac346b6e"} Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.465596 4821 generic.go:334] "Generic (PLEG): container finished" podID="7700fbde-8552-4aa1-b6e9-910bf3a45207" containerID="d85d2ec0cda6163d6b9fec85bf88a11951bea26ebdb415c165cfb3c0fe47ce6d" exitCode=0 Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.465622 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jn2vp" event={"ID":"7700fbde-8552-4aa1-b6e9-910bf3a45207","Type":"ContainerDied","Data":"d85d2ec0cda6163d6b9fec85bf88a11951bea26ebdb415c165cfb3c0fe47ce6d"} Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.552574 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8b3d34e8-81c3-4214-a3d9-a3d787b69b9a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g6ghn\" (UID: \"8b3d34e8-81c3-4214-a3d9-a3d787b69b9a\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6ghn" Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.552915 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt2p2\" (UniqueName: \"kubernetes.io/projected/8b3d34e8-81c3-4214-a3d9-a3d787b69b9a-kube-api-access-tt2p2\") pod \"marketplace-operator-79b997595-g6ghn\" (UID: 
\"8b3d34e8-81c3-4214-a3d9-a3d787b69b9a\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6ghn" Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.553025 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8b3d34e8-81c3-4214-a3d9-a3d787b69b9a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g6ghn\" (UID: \"8b3d34e8-81c3-4214-a3d9-a3d787b69b9a\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6ghn" Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.554367 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8b3d34e8-81c3-4214-a3d9-a3d787b69b9a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g6ghn\" (UID: \"8b3d34e8-81c3-4214-a3d9-a3d787b69b9a\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6ghn" Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.566918 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8b3d34e8-81c3-4214-a3d9-a3d787b69b9a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g6ghn\" (UID: \"8b3d34e8-81c3-4214-a3d9-a3d787b69b9a\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6ghn" Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.573101 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt2p2\" (UniqueName: \"kubernetes.io/projected/8b3d34e8-81c3-4214-a3d9-a3d787b69b9a-kube-api-access-tt2p2\") pod \"marketplace-operator-79b997595-g6ghn\" (UID: \"8b3d34e8-81c3-4214-a3d9-a3d787b69b9a\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6ghn" Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.700457 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-g6ghn" Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.713218 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jn2vp" Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.755464 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7700fbde-8552-4aa1-b6e9-910bf3a45207-catalog-content\") pod \"7700fbde-8552-4aa1-b6e9-910bf3a45207\" (UID: \"7700fbde-8552-4aa1-b6e9-910bf3a45207\") " Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.755540 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7700fbde-8552-4aa1-b6e9-910bf3a45207-utilities\") pod \"7700fbde-8552-4aa1-b6e9-910bf3a45207\" (UID: \"7700fbde-8552-4aa1-b6e9-910bf3a45207\") " Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.755620 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5sbn\" (UniqueName: \"kubernetes.io/projected/7700fbde-8552-4aa1-b6e9-910bf3a45207-kube-api-access-v5sbn\") pod \"7700fbde-8552-4aa1-b6e9-910bf3a45207\" (UID: \"7700fbde-8552-4aa1-b6e9-910bf3a45207\") " Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.758723 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7700fbde-8552-4aa1-b6e9-910bf3a45207-utilities" (OuterVolumeSpecName: "utilities") pod "7700fbde-8552-4aa1-b6e9-910bf3a45207" (UID: "7700fbde-8552-4aa1-b6e9-910bf3a45207"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.764830 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7700fbde-8552-4aa1-b6e9-910bf3a45207-kube-api-access-v5sbn" (OuterVolumeSpecName: "kube-api-access-v5sbn") pod "7700fbde-8552-4aa1-b6e9-910bf3a45207" (UID: "7700fbde-8552-4aa1-b6e9-910bf3a45207"). InnerVolumeSpecName "kube-api-access-v5sbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.815783 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4bqx8" Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.828120 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7700fbde-8552-4aa1-b6e9-910bf3a45207-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7700fbde-8552-4aa1-b6e9-910bf3a45207" (UID: "7700fbde-8552-4aa1-b6e9-910bf3a45207"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.857284 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5aa5939d-5ace-49e7-a2ba-b028cf241b02-catalog-content\") pod \"5aa5939d-5ace-49e7-a2ba-b028cf241b02\" (UID: \"5aa5939d-5ace-49e7-a2ba-b028cf241b02\") " Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.857321 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tf5r\" (UniqueName: \"kubernetes.io/projected/5aa5939d-5ace-49e7-a2ba-b028cf241b02-kube-api-access-7tf5r\") pod \"5aa5939d-5ace-49e7-a2ba-b028cf241b02\" (UID: \"5aa5939d-5ace-49e7-a2ba-b028cf241b02\") " Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.857375 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5aa5939d-5ace-49e7-a2ba-b028cf241b02-utilities\") pod \"5aa5939d-5ace-49e7-a2ba-b028cf241b02\" (UID: \"5aa5939d-5ace-49e7-a2ba-b028cf241b02\") " Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.857615 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5sbn\" (UniqueName: \"kubernetes.io/projected/7700fbde-8552-4aa1-b6e9-910bf3a45207-kube-api-access-v5sbn\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.857627 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7700fbde-8552-4aa1-b6e9-910bf3a45207-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.857635 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7700fbde-8552-4aa1-b6e9-910bf3a45207-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.858205 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5aa5939d-5ace-49e7-a2ba-b028cf241b02-utilities" (OuterVolumeSpecName: "utilities") pod "5aa5939d-5ace-49e7-a2ba-b028cf241b02" (UID: "5aa5939d-5ace-49e7-a2ba-b028cf241b02"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.867985 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5aa5939d-5ace-49e7-a2ba-b028cf241b02-kube-api-access-7tf5r" (OuterVolumeSpecName: "kube-api-access-7tf5r") pod "5aa5939d-5ace-49e7-a2ba-b028cf241b02" (UID: "5aa5939d-5ace-49e7-a2ba-b028cf241b02"). InnerVolumeSpecName "kube-api-access-7tf5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.896130 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-854tr" Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.934581 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dn2dm" Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.947139 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fvjkc" Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.962519 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f84add95-1bc2-4534-93aa-bba177335e74-marketplace-trusted-ca\") pod \"f84add95-1bc2-4534-93aa-bba177335e74\" (UID: \"f84add95-1bc2-4534-93aa-bba177335e74\") " Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.962942 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7nvw\" (UniqueName: \"kubernetes.io/projected/f84add95-1bc2-4534-93aa-bba177335e74-kube-api-access-f7nvw\") pod \"f84add95-1bc2-4534-93aa-bba177335e74\" (UID: \"f84add95-1bc2-4534-93aa-bba177335e74\") " Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.962975 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f84add95-1bc2-4534-93aa-bba177335e74-marketplace-operator-metrics\") pod \"f84add95-1bc2-4534-93aa-bba177335e74\" (UID: \"f84add95-1bc2-4534-93aa-bba177335e74\") " Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.963262 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tf5r\" (UniqueName: \"kubernetes.io/projected/5aa5939d-5ace-49e7-a2ba-b028cf241b02-kube-api-access-7tf5r\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.963276 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5aa5939d-5ace-49e7-a2ba-b028cf241b02-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.964861 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5aa5939d-5ace-49e7-a2ba-b028cf241b02-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5aa5939d-5ace-49e7-a2ba-b028cf241b02" (UID: "5aa5939d-5ace-49e7-a2ba-b028cf241b02"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.965620 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f84add95-1bc2-4534-93aa-bba177335e74-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "f84add95-1bc2-4534-93aa-bba177335e74" (UID: "f84add95-1bc2-4534-93aa-bba177335e74"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.972027 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f84add95-1bc2-4534-93aa-bba177335e74-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "f84add95-1bc2-4534-93aa-bba177335e74" (UID: "f84add95-1bc2-4534-93aa-bba177335e74"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:07:30 crc kubenswrapper[4821]: I0930 17:07:30.977220 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f84add95-1bc2-4534-93aa-bba177335e74-kube-api-access-f7nvw" (OuterVolumeSpecName: "kube-api-access-f7nvw") pod "f84add95-1bc2-4534-93aa-bba177335e74" (UID: "f84add95-1bc2-4534-93aa-bba177335e74"). InnerVolumeSpecName "kube-api-access-f7nvw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.064007 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c07e1e4d-c8fa-48d6-a138-3c42ccf2e368-utilities\") pod \"c07e1e4d-c8fa-48d6-a138-3c42ccf2e368\" (UID: \"c07e1e4d-c8fa-48d6-a138-3c42ccf2e368\") " Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.064121 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c07e1e4d-c8fa-48d6-a138-3c42ccf2e368-catalog-content\") pod \"c07e1e4d-c8fa-48d6-a138-3c42ccf2e368\" (UID: \"c07e1e4d-c8fa-48d6-a138-3c42ccf2e368\") " Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.064148 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d7a822d-04df-4ce0-bc18-15bc2195f18e-catalog-content\") pod \"7d7a822d-04df-4ce0-bc18-15bc2195f18e\" (UID: \"7d7a822d-04df-4ce0-bc18-15bc2195f18e\") " Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.064195 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zsmc\" (UniqueName: \"kubernetes.io/projected/c07e1e4d-c8fa-48d6-a138-3c42ccf2e368-kube-api-access-7zsmc\") pod \"c07e1e4d-c8fa-48d6-a138-3c42ccf2e368\" (UID: \"c07e1e4d-c8fa-48d6-a138-3c42ccf2e368\") " Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.064240 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d7a822d-04df-4ce0-bc18-15bc2195f18e-utilities\") pod \"7d7a822d-04df-4ce0-bc18-15bc2195f18e\" (UID: \"7d7a822d-04df-4ce0-bc18-15bc2195f18e\") " Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.064285 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6n7pt\" (UniqueName: \"kubernetes.io/projected/7d7a822d-04df-4ce0-bc18-15bc2195f18e-kube-api-access-6n7pt\") pod \"7d7a822d-04df-4ce0-bc18-15bc2195f18e\" (UID: \"7d7a822d-04df-4ce0-bc18-15bc2195f18e\") " Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.064492 4821 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f84add95-1bc2-4534-93aa-bba177335e74-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.064504 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5aa5939d-5ace-49e7-a2ba-b028cf241b02-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.064513 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7nvw\" (UniqueName: \"kubernetes.io/projected/f84add95-1bc2-4534-93aa-bba177335e74-kube-api-access-f7nvw\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.064521 4821 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f84add95-1bc2-4534-93aa-bba177335e74-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.064832 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c07e1e4d-c8fa-48d6-a138-3c42ccf2e368-utilities" 
(OuterVolumeSpecName: "utilities") pod "c07e1e4d-c8fa-48d6-a138-3c42ccf2e368" (UID: "c07e1e4d-c8fa-48d6-a138-3c42ccf2e368"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.065521 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d7a822d-04df-4ce0-bc18-15bc2195f18e-utilities" (OuterVolumeSpecName: "utilities") pod "7d7a822d-04df-4ce0-bc18-15bc2195f18e" (UID: "7d7a822d-04df-4ce0-bc18-15bc2195f18e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.066815 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d7a822d-04df-4ce0-bc18-15bc2195f18e-kube-api-access-6n7pt" (OuterVolumeSpecName: "kube-api-access-6n7pt") pod "7d7a822d-04df-4ce0-bc18-15bc2195f18e" (UID: "7d7a822d-04df-4ce0-bc18-15bc2195f18e"). InnerVolumeSpecName "kube-api-access-6n7pt". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.068299 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c07e1e4d-c8fa-48d6-a138-3c42ccf2e368-kube-api-access-7zsmc" (OuterVolumeSpecName: "kube-api-access-7zsmc") pod "c07e1e4d-c8fa-48d6-a138-3c42ccf2e368" (UID: "c07e1e4d-c8fa-48d6-a138-3c42ccf2e368"). InnerVolumeSpecName "kube-api-access-7zsmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.079887 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c07e1e4d-c8fa-48d6-a138-3c42ccf2e368-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c07e1e4d-c8fa-48d6-a138-3c42ccf2e368" (UID: "c07e1e4d-c8fa-48d6-a138-3c42ccf2e368"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.143629 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d7a822d-04df-4ce0-bc18-15bc2195f18e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d7a822d-04df-4ce0-bc18-15bc2195f18e" (UID: "7d7a822d-04df-4ce0-bc18-15bc2195f18e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.166017 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zsmc\" (UniqueName: \"kubernetes.io/projected/c07e1e4d-c8fa-48d6-a138-3c42ccf2e368-kube-api-access-7zsmc\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.166046 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d7a822d-04df-4ce0-bc18-15bc2195f18e-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.166120 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6n7pt\" (UniqueName: \"kubernetes.io/projected/7d7a822d-04df-4ce0-bc18-15bc2195f18e-kube-api-access-6n7pt\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.166138 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c07e1e4d-c8fa-48d6-a138-3c42ccf2e368-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.166148 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c07e1e4d-c8fa-48d6-a138-3c42ccf2e368-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.166158 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d7a822d-04df-4ce0-bc18-15bc2195f18e-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.212211 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g6ghn"] Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.471275 4821 generic.go:334] "Generic (PLEG): container finished" podID="c07e1e4d-c8fa-48d6-a138-3c42ccf2e368" containerID="eec0b1746d738d7c3d6385ceff8769a119b4dfb0b9462e3682ca18a4b5db1705" exitCode=0 Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.471340 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvjkc" event={"ID":"c07e1e4d-c8fa-48d6-a138-3c42ccf2e368","Type":"ContainerDied","Data":"eec0b1746d738d7c3d6385ceff8769a119b4dfb0b9462e3682ca18a4b5db1705"} Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.471366 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvjkc" event={"ID":"c07e1e4d-c8fa-48d6-a138-3c42ccf2e368","Type":"ContainerDied","Data":"7d7c4e58c949ec2c221c39406ff01e2bcffbf5412bc867e0b87f4b2592596a03"} Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.471386 4821 scope.go:117] "RemoveContainer" containerID="eec0b1746d738d7c3d6385ceff8769a119b4dfb0b9462e3682ca18a4b5db1705" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.471495 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fvjkc" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.476375 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g6ghn" event={"ID":"8b3d34e8-81c3-4214-a3d9-a3d787b69b9a","Type":"ContainerStarted","Data":"8c725a95a4a86e6924a7e84c19f60e9c70e2561e1c8f9a3a7a1d66a218debe2b"} Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.476421 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g6ghn" event={"ID":"8b3d34e8-81c3-4214-a3d9-a3d787b69b9a","Type":"ContainerStarted","Data":"43748d55614ad154e97ecbf465351976581e554b2edfa5f7b3d8dfaa7121cf4d"} Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.476609 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-g6ghn" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.478175 4821 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-g6ghn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.55:8080/healthz\": dial tcp 10.217.0.55:8080: connect: connection refused" start-of-body= Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.478217 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-g6ghn" podUID="8b3d34e8-81c3-4214-a3d9-a3d787b69b9a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.55:8080/healthz\": dial tcp 10.217.0.55:8080: connect: connection refused" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.482204 4821 generic.go:334] "Generic (PLEG): container finished" podID="f84add95-1bc2-4534-93aa-bba177335e74" containerID="3c6dab9bbdb91a2efba542dc2f9da844a3e1c5eea152dbe8ccaec960c33d2244" exitCode=0 Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.482294 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-854tr" event={"ID":"f84add95-1bc2-4534-93aa-bba177335e74","Type":"ContainerDied","Data":"3c6dab9bbdb91a2efba542dc2f9da844a3e1c5eea152dbe8ccaec960c33d2244"} Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.482326 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-854tr" event={"ID":"f84add95-1bc2-4534-93aa-bba177335e74","Type":"ContainerDied","Data":"8a79ecf4d761ccbe7ade9bb0c88cf6d2430052499f690eded76c472c42b5c340"} Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.482562 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-854tr" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.485294 4821 generic.go:334] "Generic (PLEG): container finished" podID="7d7a822d-04df-4ce0-bc18-15bc2195f18e" containerID="6c50c125cd07e97a671160725289be2a6cb600fe7940285779715c67acfd21e7" exitCode=0 Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.485343 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dn2dm" event={"ID":"7d7a822d-04df-4ce0-bc18-15bc2195f18e","Type":"ContainerDied","Data":"6c50c125cd07e97a671160725289be2a6cb600fe7940285779715c67acfd21e7"} Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.485376 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dn2dm" event={"ID":"7d7a822d-04df-4ce0-bc18-15bc2195f18e","Type":"ContainerDied","Data":"f5dc18a04bc4db03eb2b88cb9bff2cc81aeb453318bfeaca62bfde1269aecbc4"} Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.485421 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dn2dm" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.487337 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jn2vp" event={"ID":"7700fbde-8552-4aa1-b6e9-910bf3a45207","Type":"ContainerDied","Data":"3373a4a5b492b5df76b5d7dd0a26b89b357e0c5c1dc012f35b82f745363b7f5a"} Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.487416 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jn2vp" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.489840 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bqx8" event={"ID":"5aa5939d-5ace-49e7-a2ba-b028cf241b02","Type":"ContainerDied","Data":"bcfa889f70f001330c0ef8e77cd0b3bfa551be79e028ea5f232a78f9fe8e8b62"} Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.489939 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4bqx8" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.547101 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-g6ghn" podStartSLOduration=1.5470707030000002 podStartE2EDuration="1.547070703s" podCreationTimestamp="2025-09-30 17:07:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:07:31.494660582 +0000 UTC m=+247.399706536" watchObservedRunningTime="2025-09-30 17:07:31.547070703 +0000 UTC m=+247.452116637" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.549335 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fvjkc"] Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.556500 4821 scope.go:117] "RemoveContainer" containerID="c7ab21000379865292258f3c9d64c10801d46f708e3b3848164e9d1f21823db4" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.560718 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fvjkc"] Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.575482 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-854tr"] Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.587420 4821 scope.go:117] "RemoveContainer" containerID="10551ec087af527365481ca7f746523b78dbfd7a94b0f25a61ed7a5b9964aad6" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.591882 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-854tr"] Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.606989 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jn2vp"] Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.616452 4821 scope.go:117] "RemoveContainer" containerID="eec0b1746d738d7c3d6385ceff8769a119b4dfb0b9462e3682ca18a4b5db1705" Sep 30 17:07:31 crc kubenswrapper[4821]: E0930 17:07:31.618436 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eec0b1746d738d7c3d6385ceff8769a119b4dfb0b9462e3682ca18a4b5db1705\": container with ID starting with eec0b1746d738d7c3d6385ceff8769a119b4dfb0b9462e3682ca18a4b5db1705 not found: ID does not exist" containerID="eec0b1746d738d7c3d6385ceff8769a119b4dfb0b9462e3682ca18a4b5db1705" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.618490 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eec0b1746d738d7c3d6385ceff8769a119b4dfb0b9462e3682ca18a4b5db1705"} err="failed to get container status \"eec0b1746d738d7c3d6385ceff8769a119b4dfb0b9462e3682ca18a4b5db1705\": rpc error: code = NotFound desc = could not find container \"eec0b1746d738d7c3d6385ceff8769a119b4dfb0b9462e3682ca18a4b5db1705\": container with ID starting with eec0b1746d738d7c3d6385ceff8769a119b4dfb0b9462e3682ca18a4b5db1705 not found: ID does not exist" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.618507 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jn2vp"] Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.618522 4821 scope.go:117] "RemoveContainer" containerID="c7ab21000379865292258f3c9d64c10801d46f708e3b3848164e9d1f21823db4" Sep 30 17:07:31 crc kubenswrapper[4821]: E0930 
17:07:31.618971 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7ab21000379865292258f3c9d64c10801d46f708e3b3848164e9d1f21823db4\": container with ID starting with c7ab21000379865292258f3c9d64c10801d46f708e3b3848164e9d1f21823db4 not found: ID does not exist" containerID="c7ab21000379865292258f3c9d64c10801d46f708e3b3848164e9d1f21823db4" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.619001 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7ab21000379865292258f3c9d64c10801d46f708e3b3848164e9d1f21823db4"} err="failed to get container status \"c7ab21000379865292258f3c9d64c10801d46f708e3b3848164e9d1f21823db4\": rpc error: code = NotFound desc = could not find container \"c7ab21000379865292258f3c9d64c10801d46f708e3b3848164e9d1f21823db4\": container with ID starting with c7ab21000379865292258f3c9d64c10801d46f708e3b3848164e9d1f21823db4 not found: ID does not exist" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.619024 4821 scope.go:117] "RemoveContainer" containerID="10551ec087af527365481ca7f746523b78dbfd7a94b0f25a61ed7a5b9964aad6" Sep 30 17:07:31 crc kubenswrapper[4821]: E0930 17:07:31.619332 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10551ec087af527365481ca7f746523b78dbfd7a94b0f25a61ed7a5b9964aad6\": container with ID starting with 10551ec087af527365481ca7f746523b78dbfd7a94b0f25a61ed7a5b9964aad6 not found: ID does not exist" containerID="10551ec087af527365481ca7f746523b78dbfd7a94b0f25a61ed7a5b9964aad6" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.619370 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10551ec087af527365481ca7f746523b78dbfd7a94b0f25a61ed7a5b9964aad6"} err="failed to get container status \"10551ec087af527365481ca7f746523b78dbfd7a94b0f25a61ed7a5b9964aad6\": rpc error: code = NotFound desc = could not find container \"10551ec087af527365481ca7f746523b78dbfd7a94b0f25a61ed7a5b9964aad6\": container with ID starting with 10551ec087af527365481ca7f746523b78dbfd7a94b0f25a61ed7a5b9964aad6 not found: ID does not exist" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.619395 4821 scope.go:117] "RemoveContainer" containerID="3c6dab9bbdb91a2efba542dc2f9da844a3e1c5eea152dbe8ccaec960c33d2244" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.626038 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dn2dm"] Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.630010 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dn2dm"] Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.635299 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4bqx8"] Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.638089 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4bqx8"] Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.638230 4821 scope.go:117] "RemoveContainer" containerID="3c6dab9bbdb91a2efba542dc2f9da844a3e1c5eea152dbe8ccaec960c33d2244" Sep 30 17:07:31 crc kubenswrapper[4821]: E0930 17:07:31.640290 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c6dab9bbdb91a2efba542dc2f9da844a3e1c5eea152dbe8ccaec960c33d2244\": 
container with ID starting with 3c6dab9bbdb91a2efba542dc2f9da844a3e1c5eea152dbe8ccaec960c33d2244 not found: ID does not exist" containerID="3c6dab9bbdb91a2efba542dc2f9da844a3e1c5eea152dbe8ccaec960c33d2244" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.640337 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c6dab9bbdb91a2efba542dc2f9da844a3e1c5eea152dbe8ccaec960c33d2244"} err="failed to get container status \"3c6dab9bbdb91a2efba542dc2f9da844a3e1c5eea152dbe8ccaec960c33d2244\": rpc error: code = NotFound desc = could not find container \"3c6dab9bbdb91a2efba542dc2f9da844a3e1c5eea152dbe8ccaec960c33d2244\": container with ID starting with 3c6dab9bbdb91a2efba542dc2f9da844a3e1c5eea152dbe8ccaec960c33d2244 not found: ID does not exist" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.640367 4821 scope.go:117] "RemoveContainer" containerID="6c50c125cd07e97a671160725289be2a6cb600fe7940285779715c67acfd21e7" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.657740 4821 scope.go:117] "RemoveContainer" containerID="fb09e9fbd945ca04460a41d2f4512d1296147479e062f8c8b0e99931c37f727d" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.671384 4821 scope.go:117] "RemoveContainer" containerID="a660b5d6cbd17f57e0c532301d1036f4b72f7ea42842614d275913383d130284" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.694478 4821 scope.go:117] "RemoveContainer" containerID="6c50c125cd07e97a671160725289be2a6cb600fe7940285779715c67acfd21e7" Sep 30 17:07:31 crc kubenswrapper[4821]: E0930 17:07:31.697955 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c50c125cd07e97a671160725289be2a6cb600fe7940285779715c67acfd21e7\": container with ID starting with 6c50c125cd07e97a671160725289be2a6cb600fe7940285779715c67acfd21e7 not found: ID does not exist" containerID="6c50c125cd07e97a671160725289be2a6cb600fe7940285779715c67acfd21e7" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.698022 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c50c125cd07e97a671160725289be2a6cb600fe7940285779715c67acfd21e7"} err="failed to get container status \"6c50c125cd07e97a671160725289be2a6cb600fe7940285779715c67acfd21e7\": rpc error: code = NotFound desc = could not find container \"6c50c125cd07e97a671160725289be2a6cb600fe7940285779715c67acfd21e7\": container with ID starting with 6c50c125cd07e97a671160725289be2a6cb600fe7940285779715c67acfd21e7 not found: ID does not exist" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.698062 4821 scope.go:117] "RemoveContainer" containerID="fb09e9fbd945ca04460a41d2f4512d1296147479e062f8c8b0e99931c37f727d" Sep 30 17:07:31 crc kubenswrapper[4821]: E0930 17:07:31.701910 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb09e9fbd945ca04460a41d2f4512d1296147479e062f8c8b0e99931c37f727d\": container with ID starting with fb09e9fbd945ca04460a41d2f4512d1296147479e062f8c8b0e99931c37f727d not found: ID does not exist" containerID="fb09e9fbd945ca04460a41d2f4512d1296147479e062f8c8b0e99931c37f727d" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.701946 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb09e9fbd945ca04460a41d2f4512d1296147479e062f8c8b0e99931c37f727d"} err="failed to get container status \"fb09e9fbd945ca04460a41d2f4512d1296147479e062f8c8b0e99931c37f727d\": 
rpc error: code = NotFound desc = could not find container \"fb09e9fbd945ca04460a41d2f4512d1296147479e062f8c8b0e99931c37f727d\": container with ID starting with fb09e9fbd945ca04460a41d2f4512d1296147479e062f8c8b0e99931c37f727d not found: ID does not exist" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.701986 4821 scope.go:117] "RemoveContainer" containerID="a660b5d6cbd17f57e0c532301d1036f4b72f7ea42842614d275913383d130284" Sep 30 17:07:31 crc kubenswrapper[4821]: E0930 17:07:31.702308 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a660b5d6cbd17f57e0c532301d1036f4b72f7ea42842614d275913383d130284\": container with ID starting with a660b5d6cbd17f57e0c532301d1036f4b72f7ea42842614d275913383d130284 not found: ID does not exist" containerID="a660b5d6cbd17f57e0c532301d1036f4b72f7ea42842614d275913383d130284" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.702350 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a660b5d6cbd17f57e0c532301d1036f4b72f7ea42842614d275913383d130284"} err="failed to get container status \"a660b5d6cbd17f57e0c532301d1036f4b72f7ea42842614d275913383d130284\": rpc error: code = NotFound desc = could not find container \"a660b5d6cbd17f57e0c532301d1036f4b72f7ea42842614d275913383d130284\": container with ID starting with a660b5d6cbd17f57e0c532301d1036f4b72f7ea42842614d275913383d130284 not found: ID does not exist" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.702378 4821 scope.go:117] "RemoveContainer" containerID="d85d2ec0cda6163d6b9fec85bf88a11951bea26ebdb415c165cfb3c0fe47ce6d" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.720053 4821 scope.go:117] "RemoveContainer" containerID="ee61d309a86d52d116a5fe3a41a93205a0a75aeb85ff311610cec02545c55f42" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.734411 4821 scope.go:117] "RemoveContainer" containerID="afb832c8e385a199de9234050582200f5d2b6ced531f3abfba198e92c9ab6b30" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.748589 4821 scope.go:117] "RemoveContainer" containerID="16da39e29f9bcdc4a1e85309b9f2f83d4be7aaf7a1814e7ca41cb236ac346b6e" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.761907 4821 scope.go:117] "RemoveContainer" containerID="5094e369a588736b2b78fd93dda9496c7e8ad65a5335aec91a92a918d9ddb136" Sep 30 17:07:31 crc kubenswrapper[4821]: I0930 17:07:31.774664 4821 scope.go:117] "RemoveContainer" containerID="7e86de19397572a2a4a20fb54f11e2b1d0a44fb62f34a13a558275d1ae811d62" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.504490 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-g6ghn" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.546545 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nm7zd"] Sep 30 17:07:32 crc kubenswrapper[4821]: E0930 17:07:32.546731 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7700fbde-8552-4aa1-b6e9-910bf3a45207" containerName="registry-server" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.546742 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="7700fbde-8552-4aa1-b6e9-910bf3a45207" containerName="registry-server" Sep 30 17:07:32 crc kubenswrapper[4821]: E0930 17:07:32.546752 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d7a822d-04df-4ce0-bc18-15bc2195f18e" containerName="extract-content" Sep 30 
17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.546757 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d7a822d-04df-4ce0-bc18-15bc2195f18e" containerName="extract-content" Sep 30 17:07:32 crc kubenswrapper[4821]: E0930 17:07:32.546765 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aa5939d-5ace-49e7-a2ba-b028cf241b02" containerName="registry-server" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.546772 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aa5939d-5ace-49e7-a2ba-b028cf241b02" containerName="registry-server" Sep 30 17:07:32 crc kubenswrapper[4821]: E0930 17:07:32.546780 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d7a822d-04df-4ce0-bc18-15bc2195f18e" containerName="extract-utilities" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.546785 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d7a822d-04df-4ce0-bc18-15bc2195f18e" containerName="extract-utilities" Sep 30 17:07:32 crc kubenswrapper[4821]: E0930 17:07:32.546793 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d7a822d-04df-4ce0-bc18-15bc2195f18e" containerName="registry-server" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.546799 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d7a822d-04df-4ce0-bc18-15bc2195f18e" containerName="registry-server" Sep 30 17:07:32 crc kubenswrapper[4821]: E0930 17:07:32.546806 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c07e1e4d-c8fa-48d6-a138-3c42ccf2e368" containerName="extract-content" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.546811 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="c07e1e4d-c8fa-48d6-a138-3c42ccf2e368" containerName="extract-content" Sep 30 17:07:32 crc kubenswrapper[4821]: E0930 17:07:32.546819 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7700fbde-8552-4aa1-b6e9-910bf3a45207" containerName="extract-content" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.546825 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="7700fbde-8552-4aa1-b6e9-910bf3a45207" containerName="extract-content" Sep 30 17:07:32 crc kubenswrapper[4821]: E0930 17:07:32.546836 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7700fbde-8552-4aa1-b6e9-910bf3a45207" containerName="extract-utilities" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.546843 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="7700fbde-8552-4aa1-b6e9-910bf3a45207" containerName="extract-utilities" Sep 30 17:07:32 crc kubenswrapper[4821]: E0930 17:07:32.546852 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f84add95-1bc2-4534-93aa-bba177335e74" containerName="marketplace-operator" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.546858 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="f84add95-1bc2-4534-93aa-bba177335e74" containerName="marketplace-operator" Sep 30 17:07:32 crc kubenswrapper[4821]: E0930 17:07:32.546868 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aa5939d-5ace-49e7-a2ba-b028cf241b02" containerName="extract-utilities" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.546873 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aa5939d-5ace-49e7-a2ba-b028cf241b02" containerName="extract-utilities" Sep 30 17:07:32 crc kubenswrapper[4821]: E0930 17:07:32.546884 4821 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5aa5939d-5ace-49e7-a2ba-b028cf241b02" containerName="extract-content" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.546889 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aa5939d-5ace-49e7-a2ba-b028cf241b02" containerName="extract-content" Sep 30 17:07:32 crc kubenswrapper[4821]: E0930 17:07:32.546896 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c07e1e4d-c8fa-48d6-a138-3c42ccf2e368" containerName="extract-utilities" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.546901 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="c07e1e4d-c8fa-48d6-a138-3c42ccf2e368" containerName="extract-utilities" Sep 30 17:07:32 crc kubenswrapper[4821]: E0930 17:07:32.546908 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c07e1e4d-c8fa-48d6-a138-3c42ccf2e368" containerName="registry-server" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.546914 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="c07e1e4d-c8fa-48d6-a138-3c42ccf2e368" containerName="registry-server" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.546992 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="f84add95-1bc2-4534-93aa-bba177335e74" containerName="marketplace-operator" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.546999 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d7a822d-04df-4ce0-bc18-15bc2195f18e" containerName="registry-server" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.547008 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="5aa5939d-5ace-49e7-a2ba-b028cf241b02" containerName="registry-server" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.547018 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="7700fbde-8552-4aa1-b6e9-910bf3a45207" containerName="registry-server" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.547027 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="c07e1e4d-c8fa-48d6-a138-3c42ccf2e368" containerName="registry-server" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.547685 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nm7zd" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.549826 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.559397 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nm7zd"] Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.687013 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2baa7a3-2088-4b6b-8bef-d629dc402b87-catalog-content\") pod \"certified-operators-nm7zd\" (UID: \"b2baa7a3-2088-4b6b-8bef-d629dc402b87\") " pod="openshift-marketplace/certified-operators-nm7zd" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.687139 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2baa7a3-2088-4b6b-8bef-d629dc402b87-utilities\") pod \"certified-operators-nm7zd\" (UID: \"b2baa7a3-2088-4b6b-8bef-d629dc402b87\") " pod="openshift-marketplace/certified-operators-nm7zd" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.687173 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg9ss\" (UniqueName: \"kubernetes.io/projected/b2baa7a3-2088-4b6b-8bef-d629dc402b87-kube-api-access-fg9ss\") pod \"certified-operators-nm7zd\" (UID: \"b2baa7a3-2088-4b6b-8bef-d629dc402b87\") " pod="openshift-marketplace/certified-operators-nm7zd" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.724249 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5aa5939d-5ace-49e7-a2ba-b028cf241b02" path="/var/lib/kubelet/pods/5aa5939d-5ace-49e7-a2ba-b028cf241b02/volumes" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.725453 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7700fbde-8552-4aa1-b6e9-910bf3a45207" path="/var/lib/kubelet/pods/7700fbde-8552-4aa1-b6e9-910bf3a45207/volumes" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.726305 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d7a822d-04df-4ce0-bc18-15bc2195f18e" path="/var/lib/kubelet/pods/7d7a822d-04df-4ce0-bc18-15bc2195f18e/volumes" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.727737 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c07e1e4d-c8fa-48d6-a138-3c42ccf2e368" path="/var/lib/kubelet/pods/c07e1e4d-c8fa-48d6-a138-3c42ccf2e368/volumes" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.728652 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f84add95-1bc2-4534-93aa-bba177335e74" path="/var/lib/kubelet/pods/f84add95-1bc2-4534-93aa-bba177335e74/volumes" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.744110 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ljdkf"] Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.745326 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ljdkf" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.747983 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.761492 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ljdkf"] Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.788662 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dac66f0-9520-438e-aefe-321f0a63733e-utilities\") pod \"community-operators-ljdkf\" (UID: \"5dac66f0-9520-438e-aefe-321f0a63733e\") " pod="openshift-marketplace/community-operators-ljdkf" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.788712 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2baa7a3-2088-4b6b-8bef-d629dc402b87-utilities\") pod \"certified-operators-nm7zd\" (UID: \"b2baa7a3-2088-4b6b-8bef-d629dc402b87\") " pod="openshift-marketplace/certified-operators-nm7zd" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.788747 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg9ss\" (UniqueName: \"kubernetes.io/projected/b2baa7a3-2088-4b6b-8bef-d629dc402b87-kube-api-access-fg9ss\") pod \"certified-operators-nm7zd\" (UID: \"b2baa7a3-2088-4b6b-8bef-d629dc402b87\") " pod="openshift-marketplace/certified-operators-nm7zd" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.788768 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmprc\" (UniqueName: \"kubernetes.io/projected/5dac66f0-9520-438e-aefe-321f0a63733e-kube-api-access-pmprc\") pod \"community-operators-ljdkf\" (UID: \"5dac66f0-9520-438e-aefe-321f0a63733e\") " pod="openshift-marketplace/community-operators-ljdkf" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.788825 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2baa7a3-2088-4b6b-8bef-d629dc402b87-catalog-content\") pod \"certified-operators-nm7zd\" (UID: \"b2baa7a3-2088-4b6b-8bef-d629dc402b87\") " pod="openshift-marketplace/certified-operators-nm7zd" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.788846 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dac66f0-9520-438e-aefe-321f0a63733e-catalog-content\") pod \"community-operators-ljdkf\" (UID: \"5dac66f0-9520-438e-aefe-321f0a63733e\") " pod="openshift-marketplace/community-operators-ljdkf" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.789382 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2baa7a3-2088-4b6b-8bef-d629dc402b87-utilities\") pod \"certified-operators-nm7zd\" (UID: \"b2baa7a3-2088-4b6b-8bef-d629dc402b87\") " pod="openshift-marketplace/certified-operators-nm7zd" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.789740 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2baa7a3-2088-4b6b-8bef-d629dc402b87-catalog-content\") pod \"certified-operators-nm7zd\" (UID: 
\"b2baa7a3-2088-4b6b-8bef-d629dc402b87\") " pod="openshift-marketplace/certified-operators-nm7zd" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.819307 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg9ss\" (UniqueName: \"kubernetes.io/projected/b2baa7a3-2088-4b6b-8bef-d629dc402b87-kube-api-access-fg9ss\") pod \"certified-operators-nm7zd\" (UID: \"b2baa7a3-2088-4b6b-8bef-d629dc402b87\") " pod="openshift-marketplace/certified-operators-nm7zd" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.864571 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nm7zd" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.890069 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dac66f0-9520-438e-aefe-321f0a63733e-utilities\") pod \"community-operators-ljdkf\" (UID: \"5dac66f0-9520-438e-aefe-321f0a63733e\") " pod="openshift-marketplace/community-operators-ljdkf" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.890154 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmprc\" (UniqueName: \"kubernetes.io/projected/5dac66f0-9520-438e-aefe-321f0a63733e-kube-api-access-pmprc\") pod \"community-operators-ljdkf\" (UID: \"5dac66f0-9520-438e-aefe-321f0a63733e\") " pod="openshift-marketplace/community-operators-ljdkf" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.890222 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dac66f0-9520-438e-aefe-321f0a63733e-catalog-content\") pod \"community-operators-ljdkf\" (UID: \"5dac66f0-9520-438e-aefe-321f0a63733e\") " pod="openshift-marketplace/community-operators-ljdkf" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.890873 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dac66f0-9520-438e-aefe-321f0a63733e-utilities\") pod \"community-operators-ljdkf\" (UID: \"5dac66f0-9520-438e-aefe-321f0a63733e\") " pod="openshift-marketplace/community-operators-ljdkf" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.890977 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dac66f0-9520-438e-aefe-321f0a63733e-catalog-content\") pod \"community-operators-ljdkf\" (UID: \"5dac66f0-9520-438e-aefe-321f0a63733e\") " pod="openshift-marketplace/community-operators-ljdkf" Sep 30 17:07:32 crc kubenswrapper[4821]: I0930 17:07:32.905538 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmprc\" (UniqueName: \"kubernetes.io/projected/5dac66f0-9520-438e-aefe-321f0a63733e-kube-api-access-pmprc\") pod \"community-operators-ljdkf\" (UID: \"5dac66f0-9520-438e-aefe-321f0a63733e\") " pod="openshift-marketplace/community-operators-ljdkf" Sep 30 17:07:33 crc kubenswrapper[4821]: I0930 17:07:33.063284 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ljdkf" Sep 30 17:07:33 crc kubenswrapper[4821]: I0930 17:07:33.255232 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nm7zd"] Sep 30 17:07:33 crc kubenswrapper[4821]: W0930 17:07:33.260242 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2baa7a3_2088_4b6b_8bef_d629dc402b87.slice/crio-272869eecb2490cad98f049635d098062721b49412af90064df2caceff868549 WatchSource:0}: Error finding container 272869eecb2490cad98f049635d098062721b49412af90064df2caceff868549: Status 404 returned error can't find the container with id 272869eecb2490cad98f049635d098062721b49412af90064df2caceff868549 Sep 30 17:07:33 crc kubenswrapper[4821]: I0930 17:07:33.446206 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ljdkf"] Sep 30 17:07:33 crc kubenswrapper[4821]: I0930 17:07:33.505924 4821 generic.go:334] "Generic (PLEG): container finished" podID="b2baa7a3-2088-4b6b-8bef-d629dc402b87" containerID="284dae812547364f629cba6d5398aae4671dd9a1a9852d7c1e9c74019d85541f" exitCode=0 Sep 30 17:07:33 crc kubenswrapper[4821]: I0930 17:07:33.506002 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nm7zd" event={"ID":"b2baa7a3-2088-4b6b-8bef-d629dc402b87","Type":"ContainerDied","Data":"284dae812547364f629cba6d5398aae4671dd9a1a9852d7c1e9c74019d85541f"} Sep 30 17:07:33 crc kubenswrapper[4821]: I0930 17:07:33.506029 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nm7zd" event={"ID":"b2baa7a3-2088-4b6b-8bef-d629dc402b87","Type":"ContainerStarted","Data":"272869eecb2490cad98f049635d098062721b49412af90064df2caceff868549"} Sep 30 17:07:33 crc kubenswrapper[4821]: I0930 17:07:33.507100 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljdkf" event={"ID":"5dac66f0-9520-438e-aefe-321f0a63733e","Type":"ContainerStarted","Data":"9525e68c093e13215399bb23e31c477feab8c4d76f471017bd27a68ad1145d15"} Sep 30 17:07:34 crc kubenswrapper[4821]: I0930 17:07:34.521726 4821 generic.go:334] "Generic (PLEG): container finished" podID="5dac66f0-9520-438e-aefe-321f0a63733e" containerID="936b0339eac1eb26c63bf616cb12367ab6cf9f159d2522df98bda85f35eff2a6" exitCode=0 Sep 30 17:07:34 crc kubenswrapper[4821]: I0930 17:07:34.521845 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljdkf" event={"ID":"5dac66f0-9520-438e-aefe-321f0a63733e","Type":"ContainerDied","Data":"936b0339eac1eb26c63bf616cb12367ab6cf9f159d2522df98bda85f35eff2a6"} Sep 30 17:07:34 crc kubenswrapper[4821]: I0930 17:07:34.530517 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nm7zd" event={"ID":"b2baa7a3-2088-4b6b-8bef-d629dc402b87","Type":"ContainerStarted","Data":"cb01018e402a7dea1421274a146a575fa0ea7e4736b6696361594925e9e21e9a"} Sep 30 17:07:34 crc kubenswrapper[4821]: I0930 17:07:34.945691 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f9gmq"] Sep 30 17:07:34 crc kubenswrapper[4821]: I0930 17:07:34.946707 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f9gmq" Sep 30 17:07:34 crc kubenswrapper[4821]: I0930 17:07:34.951332 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Sep 30 17:07:34 crc kubenswrapper[4821]: I0930 17:07:34.953833 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f9gmq"] Sep 30 17:07:35 crc kubenswrapper[4821]: I0930 17:07:35.017649 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4dcc63c-3f2c-413b-a521-ef2edb6d45bd-utilities\") pod \"redhat-marketplace-f9gmq\" (UID: \"a4dcc63c-3f2c-413b-a521-ef2edb6d45bd\") " pod="openshift-marketplace/redhat-marketplace-f9gmq" Sep 30 17:07:35 crc kubenswrapper[4821]: I0930 17:07:35.017694 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4dcc63c-3f2c-413b-a521-ef2edb6d45bd-catalog-content\") pod \"redhat-marketplace-f9gmq\" (UID: \"a4dcc63c-3f2c-413b-a521-ef2edb6d45bd\") " pod="openshift-marketplace/redhat-marketplace-f9gmq" Sep 30 17:07:35 crc kubenswrapper[4821]: I0930 17:07:35.017882 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xhsj\" (UniqueName: \"kubernetes.io/projected/a4dcc63c-3f2c-413b-a521-ef2edb6d45bd-kube-api-access-7xhsj\") pod \"redhat-marketplace-f9gmq\" (UID: \"a4dcc63c-3f2c-413b-a521-ef2edb6d45bd\") " pod="openshift-marketplace/redhat-marketplace-f9gmq" Sep 30 17:07:35 crc kubenswrapper[4821]: I0930 17:07:35.118677 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xhsj\" (UniqueName: \"kubernetes.io/projected/a4dcc63c-3f2c-413b-a521-ef2edb6d45bd-kube-api-access-7xhsj\") pod \"redhat-marketplace-f9gmq\" (UID: \"a4dcc63c-3f2c-413b-a521-ef2edb6d45bd\") " pod="openshift-marketplace/redhat-marketplace-f9gmq" Sep 30 17:07:35 crc kubenswrapper[4821]: I0930 17:07:35.118750 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4dcc63c-3f2c-413b-a521-ef2edb6d45bd-utilities\") pod \"redhat-marketplace-f9gmq\" (UID: \"a4dcc63c-3f2c-413b-a521-ef2edb6d45bd\") " pod="openshift-marketplace/redhat-marketplace-f9gmq" Sep 30 17:07:35 crc kubenswrapper[4821]: I0930 17:07:35.118768 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4dcc63c-3f2c-413b-a521-ef2edb6d45bd-catalog-content\") pod \"redhat-marketplace-f9gmq\" (UID: \"a4dcc63c-3f2c-413b-a521-ef2edb6d45bd\") " pod="openshift-marketplace/redhat-marketplace-f9gmq" Sep 30 17:07:35 crc kubenswrapper[4821]: I0930 17:07:35.119249 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4dcc63c-3f2c-413b-a521-ef2edb6d45bd-catalog-content\") pod \"redhat-marketplace-f9gmq\" (UID: \"a4dcc63c-3f2c-413b-a521-ef2edb6d45bd\") " pod="openshift-marketplace/redhat-marketplace-f9gmq" Sep 30 17:07:35 crc kubenswrapper[4821]: I0930 17:07:35.120407 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4dcc63c-3f2c-413b-a521-ef2edb6d45bd-utilities\") pod \"redhat-marketplace-f9gmq\" (UID: 
\"a4dcc63c-3f2c-413b-a521-ef2edb6d45bd\") " pod="openshift-marketplace/redhat-marketplace-f9gmq" Sep 30 17:07:35 crc kubenswrapper[4821]: I0930 17:07:35.140000 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xhsj\" (UniqueName: \"kubernetes.io/projected/a4dcc63c-3f2c-413b-a521-ef2edb6d45bd-kube-api-access-7xhsj\") pod \"redhat-marketplace-f9gmq\" (UID: \"a4dcc63c-3f2c-413b-a521-ef2edb6d45bd\") " pod="openshift-marketplace/redhat-marketplace-f9gmq" Sep 30 17:07:35 crc kubenswrapper[4821]: I0930 17:07:35.154062 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c6d9b"] Sep 30 17:07:35 crc kubenswrapper[4821]: I0930 17:07:35.156264 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c6d9b" Sep 30 17:07:35 crc kubenswrapper[4821]: I0930 17:07:35.158439 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Sep 30 17:07:35 crc kubenswrapper[4821]: I0930 17:07:35.159774 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c6d9b"] Sep 30 17:07:35 crc kubenswrapper[4821]: I0930 17:07:35.219901 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k26b7\" (UniqueName: \"kubernetes.io/projected/68f64d5d-0c46-4199-977c-a9d7820a9c80-kube-api-access-k26b7\") pod \"redhat-operators-c6d9b\" (UID: \"68f64d5d-0c46-4199-977c-a9d7820a9c80\") " pod="openshift-marketplace/redhat-operators-c6d9b" Sep 30 17:07:35 crc kubenswrapper[4821]: I0930 17:07:35.220002 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68f64d5d-0c46-4199-977c-a9d7820a9c80-catalog-content\") pod \"redhat-operators-c6d9b\" (UID: \"68f64d5d-0c46-4199-977c-a9d7820a9c80\") " pod="openshift-marketplace/redhat-operators-c6d9b" Sep 30 17:07:35 crc kubenswrapper[4821]: I0930 17:07:35.220239 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68f64d5d-0c46-4199-977c-a9d7820a9c80-utilities\") pod \"redhat-operators-c6d9b\" (UID: \"68f64d5d-0c46-4199-977c-a9d7820a9c80\") " pod="openshift-marketplace/redhat-operators-c6d9b" Sep 30 17:07:35 crc kubenswrapper[4821]: I0930 17:07:35.270773 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f9gmq" Sep 30 17:07:35 crc kubenswrapper[4821]: I0930 17:07:35.321059 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k26b7\" (UniqueName: \"kubernetes.io/projected/68f64d5d-0c46-4199-977c-a9d7820a9c80-kube-api-access-k26b7\") pod \"redhat-operators-c6d9b\" (UID: \"68f64d5d-0c46-4199-977c-a9d7820a9c80\") " pod="openshift-marketplace/redhat-operators-c6d9b" Sep 30 17:07:35 crc kubenswrapper[4821]: I0930 17:07:35.321183 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68f64d5d-0c46-4199-977c-a9d7820a9c80-catalog-content\") pod \"redhat-operators-c6d9b\" (UID: \"68f64d5d-0c46-4199-977c-a9d7820a9c80\") " pod="openshift-marketplace/redhat-operators-c6d9b" Sep 30 17:07:35 crc kubenswrapper[4821]: I0930 17:07:35.321239 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68f64d5d-0c46-4199-977c-a9d7820a9c80-utilities\") pod \"redhat-operators-c6d9b\" (UID: \"68f64d5d-0c46-4199-977c-a9d7820a9c80\") " pod="openshift-marketplace/redhat-operators-c6d9b" Sep 30 17:07:35 crc kubenswrapper[4821]: I0930 17:07:35.321715 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68f64d5d-0c46-4199-977c-a9d7820a9c80-utilities\") pod \"redhat-operators-c6d9b\" (UID: \"68f64d5d-0c46-4199-977c-a9d7820a9c80\") " pod="openshift-marketplace/redhat-operators-c6d9b" Sep 30 17:07:35 crc kubenswrapper[4821]: I0930 17:07:35.322290 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68f64d5d-0c46-4199-977c-a9d7820a9c80-catalog-content\") pod \"redhat-operators-c6d9b\" (UID: \"68f64d5d-0c46-4199-977c-a9d7820a9c80\") " pod="openshift-marketplace/redhat-operators-c6d9b" Sep 30 17:07:35 crc kubenswrapper[4821]: I0930 17:07:35.352798 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k26b7\" (UniqueName: \"kubernetes.io/projected/68f64d5d-0c46-4199-977c-a9d7820a9c80-kube-api-access-k26b7\") pod \"redhat-operators-c6d9b\" (UID: \"68f64d5d-0c46-4199-977c-a9d7820a9c80\") " pod="openshift-marketplace/redhat-operators-c6d9b" Sep 30 17:07:35 crc kubenswrapper[4821]: I0930 17:07:35.495885 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c6d9b" Sep 30 17:07:35 crc kubenswrapper[4821]: I0930 17:07:35.539172 4821 generic.go:334] "Generic (PLEG): container finished" podID="5dac66f0-9520-438e-aefe-321f0a63733e" containerID="ee3925f964564843c070a0f417a4fb64cd4c234ca5a849e7ae9a39d19ae9388a" exitCode=0 Sep 30 17:07:35 crc kubenswrapper[4821]: I0930 17:07:35.539245 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljdkf" event={"ID":"5dac66f0-9520-438e-aefe-321f0a63733e","Type":"ContainerDied","Data":"ee3925f964564843c070a0f417a4fb64cd4c234ca5a849e7ae9a39d19ae9388a"} Sep 30 17:07:35 crc kubenswrapper[4821]: I0930 17:07:35.542074 4821 generic.go:334] "Generic (PLEG): container finished" podID="b2baa7a3-2088-4b6b-8bef-d629dc402b87" containerID="cb01018e402a7dea1421274a146a575fa0ea7e4736b6696361594925e9e21e9a" exitCode=0 Sep 30 17:07:35 crc kubenswrapper[4821]: I0930 17:07:35.542162 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nm7zd" event={"ID":"b2baa7a3-2088-4b6b-8bef-d629dc402b87","Type":"ContainerDied","Data":"cb01018e402a7dea1421274a146a575fa0ea7e4736b6696361594925e9e21e9a"} Sep 30 17:07:35 crc kubenswrapper[4821]: I0930 17:07:35.696543 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f9gmq"] Sep 30 17:07:35 crc kubenswrapper[4821]: I0930 17:07:35.870189 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c6d9b"] Sep 30 17:07:35 crc kubenswrapper[4821]: W0930 17:07:35.881066 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68f64d5d_0c46_4199_977c_a9d7820a9c80.slice/crio-6fa423c0f0cfdebe8ff17f47e3ab6a7ca5a2af214fe8da22a72b37e508ddfaaa WatchSource:0}: Error finding container 6fa423c0f0cfdebe8ff17f47e3ab6a7ca5a2af214fe8da22a72b37e508ddfaaa: Status 404 returned error can't find the container with id 6fa423c0f0cfdebe8ff17f47e3ab6a7ca5a2af214fe8da22a72b37e508ddfaaa Sep 30 17:07:36 crc kubenswrapper[4821]: I0930 17:07:36.561189 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nm7zd" event={"ID":"b2baa7a3-2088-4b6b-8bef-d629dc402b87","Type":"ContainerStarted","Data":"d01ab7a2e6e5db4494d0e6c8b51f7b114a13ea0a5a402549b305a97748fd988b"} Sep 30 17:07:36 crc kubenswrapper[4821]: I0930 17:07:36.565217 4821 generic.go:334] "Generic (PLEG): container finished" podID="68f64d5d-0c46-4199-977c-a9d7820a9c80" containerID="3b19819fe54636e118ef01a796724708c78bd6d918e68148794d5b02f7a61303" exitCode=0 Sep 30 17:07:36 crc kubenswrapper[4821]: I0930 17:07:36.565284 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6d9b" event={"ID":"68f64d5d-0c46-4199-977c-a9d7820a9c80","Type":"ContainerDied","Data":"3b19819fe54636e118ef01a796724708c78bd6d918e68148794d5b02f7a61303"} Sep 30 17:07:36 crc kubenswrapper[4821]: I0930 17:07:36.565312 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6d9b" event={"ID":"68f64d5d-0c46-4199-977c-a9d7820a9c80","Type":"ContainerStarted","Data":"6fa423c0f0cfdebe8ff17f47e3ab6a7ca5a2af214fe8da22a72b37e508ddfaaa"} Sep 30 17:07:36 crc kubenswrapper[4821]: I0930 17:07:36.576373 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljdkf" 
event={"ID":"5dac66f0-9520-438e-aefe-321f0a63733e","Type":"ContainerStarted","Data":"967c48c91de9527fd2dfe335be9bced0e515e977d2107cc8f15df7963b5904cc"} Sep 30 17:07:36 crc kubenswrapper[4821]: I0930 17:07:36.580658 4821 generic.go:334] "Generic (PLEG): container finished" podID="a4dcc63c-3f2c-413b-a521-ef2edb6d45bd" containerID="14ec850b61be2c8fe848065b6b48993d67317bf6506de6f44ac0be6b648e1a3e" exitCode=0 Sep 30 17:07:36 crc kubenswrapper[4821]: I0930 17:07:36.580709 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9gmq" event={"ID":"a4dcc63c-3f2c-413b-a521-ef2edb6d45bd","Type":"ContainerDied","Data":"14ec850b61be2c8fe848065b6b48993d67317bf6506de6f44ac0be6b648e1a3e"} Sep 30 17:07:36 crc kubenswrapper[4821]: I0930 17:07:36.580737 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9gmq" event={"ID":"a4dcc63c-3f2c-413b-a521-ef2edb6d45bd","Type":"ContainerStarted","Data":"e32299fb6913bd09708f6c4418bebc2eea59a59c2e30fa8e480a546cd2ea58b5"} Sep 30 17:07:36 crc kubenswrapper[4821]: I0930 17:07:36.584215 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nm7zd" podStartSLOduration=1.9704524079999999 podStartE2EDuration="4.584198715s" podCreationTimestamp="2025-09-30 17:07:32 +0000 UTC" firstStartedPulling="2025-09-30 17:07:33.509032196 +0000 UTC m=+249.414078140" lastFinishedPulling="2025-09-30 17:07:36.122778503 +0000 UTC m=+252.027824447" observedRunningTime="2025-09-30 17:07:36.583677011 +0000 UTC m=+252.488722955" watchObservedRunningTime="2025-09-30 17:07:36.584198715 +0000 UTC m=+252.489244669" Sep 30 17:07:36 crc kubenswrapper[4821]: I0930 17:07:36.627299 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ljdkf" podStartSLOduration=3.150899315 podStartE2EDuration="4.627282647s" podCreationTimestamp="2025-09-30 17:07:32 +0000 UTC" firstStartedPulling="2025-09-30 17:07:34.524610713 +0000 UTC m=+250.429656657" lastFinishedPulling="2025-09-30 17:07:36.000994025 +0000 UTC m=+251.906039989" observedRunningTime="2025-09-30 17:07:36.62544162 +0000 UTC m=+252.530487564" watchObservedRunningTime="2025-09-30 17:07:36.627282647 +0000 UTC m=+252.532328591" Sep 30 17:07:37 crc kubenswrapper[4821]: I0930 17:07:37.586417 4821 generic.go:334] "Generic (PLEG): container finished" podID="a4dcc63c-3f2c-413b-a521-ef2edb6d45bd" containerID="ec5f670e7afb637a301c980e0769034332c1dde2649988ccc349311da4168b94" exitCode=0 Sep 30 17:07:37 crc kubenswrapper[4821]: I0930 17:07:37.587832 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9gmq" event={"ID":"a4dcc63c-3f2c-413b-a521-ef2edb6d45bd","Type":"ContainerDied","Data":"ec5f670e7afb637a301c980e0769034332c1dde2649988ccc349311da4168b94"} Sep 30 17:07:38 crc kubenswrapper[4821]: I0930 17:07:38.593071 4821 generic.go:334] "Generic (PLEG): container finished" podID="68f64d5d-0c46-4199-977c-a9d7820a9c80" containerID="e0772ade1326c96664c45374be0acd25f0a0e254861bcb5705c637324760c3ba" exitCode=0 Sep 30 17:07:38 crc kubenswrapper[4821]: I0930 17:07:38.593163 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6d9b" event={"ID":"68f64d5d-0c46-4199-977c-a9d7820a9c80","Type":"ContainerDied","Data":"e0772ade1326c96664c45374be0acd25f0a0e254861bcb5705c637324760c3ba"} Sep 30 17:07:39 crc kubenswrapper[4821]: I0930 17:07:39.599402 4821 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9gmq" event={"ID":"a4dcc63c-3f2c-413b-a521-ef2edb6d45bd","Type":"ContainerStarted","Data":"c8e59889a9e7bf41be5823d03d571a8c6e54717eca025fe96a784c60c0a357fd"} Sep 30 17:07:39 crc kubenswrapper[4821]: I0930 17:07:39.601845 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6d9b" event={"ID":"68f64d5d-0c46-4199-977c-a9d7820a9c80","Type":"ContainerStarted","Data":"ae8671641120e997c47480b5d34b2447ed223a9e069cced298f338a5ddb4cfad"} Sep 30 17:07:39 crc kubenswrapper[4821]: I0930 17:07:39.618111 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f9gmq" podStartSLOduration=4.165531863 podStartE2EDuration="5.618090876s" podCreationTimestamp="2025-09-30 17:07:34 +0000 UTC" firstStartedPulling="2025-09-30 17:07:36.583351162 +0000 UTC m=+252.488397106" lastFinishedPulling="2025-09-30 17:07:38.035910175 +0000 UTC m=+253.940956119" observedRunningTime="2025-09-30 17:07:39.616314981 +0000 UTC m=+255.521360925" watchObservedRunningTime="2025-09-30 17:07:39.618090876 +0000 UTC m=+255.523136820" Sep 30 17:07:42 crc kubenswrapper[4821]: I0930 17:07:42.865624 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nm7zd" Sep 30 17:07:42 crc kubenswrapper[4821]: I0930 17:07:42.866198 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nm7zd" Sep 30 17:07:42 crc kubenswrapper[4821]: I0930 17:07:42.908690 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nm7zd" Sep 30 17:07:42 crc kubenswrapper[4821]: I0930 17:07:42.924429 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c6d9b" podStartSLOduration=5.414050912 podStartE2EDuration="7.924412193s" podCreationTimestamp="2025-09-30 17:07:35 +0000 UTC" firstStartedPulling="2025-09-30 17:07:36.567441086 +0000 UTC m=+252.472487030" lastFinishedPulling="2025-09-30 17:07:39.077802367 +0000 UTC m=+254.982848311" observedRunningTime="2025-09-30 17:07:39.63739002 +0000 UTC m=+255.542435964" watchObservedRunningTime="2025-09-30 17:07:42.924412193 +0000 UTC m=+258.829458137" Sep 30 17:07:43 crc kubenswrapper[4821]: I0930 17:07:43.064686 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ljdkf" Sep 30 17:07:43 crc kubenswrapper[4821]: I0930 17:07:43.064986 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ljdkf" Sep 30 17:07:43 crc kubenswrapper[4821]: I0930 17:07:43.098946 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ljdkf" Sep 30 17:07:43 crc kubenswrapper[4821]: I0930 17:07:43.663889 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nm7zd" Sep 30 17:07:43 crc kubenswrapper[4821]: I0930 17:07:43.671590 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ljdkf" Sep 30 17:07:45 crc kubenswrapper[4821]: I0930 17:07:45.271647 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f9gmq" Sep 30 17:07:45 crc 
kubenswrapper[4821]: I0930 17:07:45.273152 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f9gmq" Sep 30 17:07:45 crc kubenswrapper[4821]: I0930 17:07:45.304417 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f9gmq" Sep 30 17:07:45 crc kubenswrapper[4821]: I0930 17:07:45.496767 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c6d9b" Sep 30 17:07:45 crc kubenswrapper[4821]: I0930 17:07:45.497481 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c6d9b" Sep 30 17:07:45 crc kubenswrapper[4821]: I0930 17:07:45.537455 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c6d9b" Sep 30 17:07:45 crc kubenswrapper[4821]: I0930 17:07:45.667489 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c6d9b" Sep 30 17:07:45 crc kubenswrapper[4821]: I0930 17:07:45.677412 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f9gmq" Sep 30 17:09:19 crc kubenswrapper[4821]: I0930 17:09:19.350328 4821 patch_prober.go:28] interesting pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:09:19 crc kubenswrapper[4821]: I0930 17:09:19.350759 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:09:49 crc kubenswrapper[4821]: I0930 17:09:49.350294 4821 patch_prober.go:28] interesting pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:09:49 crc kubenswrapper[4821]: I0930 17:09:49.350723 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:10:06 crc kubenswrapper[4821]: I0930 17:10:06.741168 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-sjl7s"] Sep 30 17:10:06 crc kubenswrapper[4821]: I0930 17:10:06.743800 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-sjl7s" Sep 30 17:10:06 crc kubenswrapper[4821]: I0930 17:10:06.792035 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-sjl7s"] Sep 30 17:10:06 crc kubenswrapper[4821]: I0930 17:10:06.868548 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x67ln\" (UniqueName: \"kubernetes.io/projected/2fcdb48f-270d-4b71-b90b-c0bb21676381-kube-api-access-x67ln\") pod \"image-registry-66df7c8f76-sjl7s\" (UID: \"2fcdb48f-270d-4b71-b90b-c0bb21676381\") " pod="openshift-image-registry/image-registry-66df7c8f76-sjl7s" Sep 30 17:10:06 crc kubenswrapper[4821]: I0930 17:10:06.868631 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-sjl7s\" (UID: \"2fcdb48f-270d-4b71-b90b-c0bb21676381\") " pod="openshift-image-registry/image-registry-66df7c8f76-sjl7s" Sep 30 17:10:06 crc kubenswrapper[4821]: I0930 17:10:06.868673 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2fcdb48f-270d-4b71-b90b-c0bb21676381-registry-certificates\") pod \"image-registry-66df7c8f76-sjl7s\" (UID: \"2fcdb48f-270d-4b71-b90b-c0bb21676381\") " pod="openshift-image-registry/image-registry-66df7c8f76-sjl7s" Sep 30 17:10:06 crc kubenswrapper[4821]: I0930 17:10:06.868710 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2fcdb48f-270d-4b71-b90b-c0bb21676381-registry-tls\") pod \"image-registry-66df7c8f76-sjl7s\" (UID: \"2fcdb48f-270d-4b71-b90b-c0bb21676381\") " pod="openshift-image-registry/image-registry-66df7c8f76-sjl7s" Sep 30 17:10:06 crc kubenswrapper[4821]: I0930 17:10:06.868749 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2fcdb48f-270d-4b71-b90b-c0bb21676381-ca-trust-extracted\") pod \"image-registry-66df7c8f76-sjl7s\" (UID: \"2fcdb48f-270d-4b71-b90b-c0bb21676381\") " pod="openshift-image-registry/image-registry-66df7c8f76-sjl7s" Sep 30 17:10:06 crc kubenswrapper[4821]: I0930 17:10:06.868779 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2fcdb48f-270d-4b71-b90b-c0bb21676381-installation-pull-secrets\") pod \"image-registry-66df7c8f76-sjl7s\" (UID: \"2fcdb48f-270d-4b71-b90b-c0bb21676381\") " pod="openshift-image-registry/image-registry-66df7c8f76-sjl7s" Sep 30 17:10:06 crc kubenswrapper[4821]: I0930 17:10:06.868854 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2fcdb48f-270d-4b71-b90b-c0bb21676381-trusted-ca\") pod \"image-registry-66df7c8f76-sjl7s\" (UID: \"2fcdb48f-270d-4b71-b90b-c0bb21676381\") " pod="openshift-image-registry/image-registry-66df7c8f76-sjl7s" Sep 30 17:10:06 crc kubenswrapper[4821]: I0930 17:10:06.868878 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/2fcdb48f-270d-4b71-b90b-c0bb21676381-bound-sa-token\") pod \"image-registry-66df7c8f76-sjl7s\" (UID: \"2fcdb48f-270d-4b71-b90b-c0bb21676381\") " pod="openshift-image-registry/image-registry-66df7c8f76-sjl7s" Sep 30 17:10:06 crc kubenswrapper[4821]: I0930 17:10:06.898992 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-sjl7s\" (UID: \"2fcdb48f-270d-4b71-b90b-c0bb21676381\") " pod="openshift-image-registry/image-registry-66df7c8f76-sjl7s" Sep 30 17:10:06 crc kubenswrapper[4821]: I0930 17:10:06.970018 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2fcdb48f-270d-4b71-b90b-c0bb21676381-ca-trust-extracted\") pod \"image-registry-66df7c8f76-sjl7s\" (UID: \"2fcdb48f-270d-4b71-b90b-c0bb21676381\") " pod="openshift-image-registry/image-registry-66df7c8f76-sjl7s" Sep 30 17:10:06 crc kubenswrapper[4821]: I0930 17:10:06.970063 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2fcdb48f-270d-4b71-b90b-c0bb21676381-installation-pull-secrets\") pod \"image-registry-66df7c8f76-sjl7s\" (UID: \"2fcdb48f-270d-4b71-b90b-c0bb21676381\") " pod="openshift-image-registry/image-registry-66df7c8f76-sjl7s" Sep 30 17:10:06 crc kubenswrapper[4821]: I0930 17:10:06.970114 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2fcdb48f-270d-4b71-b90b-c0bb21676381-trusted-ca\") pod \"image-registry-66df7c8f76-sjl7s\" (UID: \"2fcdb48f-270d-4b71-b90b-c0bb21676381\") " pod="openshift-image-registry/image-registry-66df7c8f76-sjl7s" Sep 30 17:10:06 crc kubenswrapper[4821]: I0930 17:10:06.970134 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2fcdb48f-270d-4b71-b90b-c0bb21676381-bound-sa-token\") pod \"image-registry-66df7c8f76-sjl7s\" (UID: \"2fcdb48f-270d-4b71-b90b-c0bb21676381\") " pod="openshift-image-registry/image-registry-66df7c8f76-sjl7s" Sep 30 17:10:06 crc kubenswrapper[4821]: I0930 17:10:06.970158 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x67ln\" (UniqueName: \"kubernetes.io/projected/2fcdb48f-270d-4b71-b90b-c0bb21676381-kube-api-access-x67ln\") pod \"image-registry-66df7c8f76-sjl7s\" (UID: \"2fcdb48f-270d-4b71-b90b-c0bb21676381\") " pod="openshift-image-registry/image-registry-66df7c8f76-sjl7s" Sep 30 17:10:06 crc kubenswrapper[4821]: I0930 17:10:06.970188 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2fcdb48f-270d-4b71-b90b-c0bb21676381-registry-certificates\") pod \"image-registry-66df7c8f76-sjl7s\" (UID: \"2fcdb48f-270d-4b71-b90b-c0bb21676381\") " pod="openshift-image-registry/image-registry-66df7c8f76-sjl7s" Sep 30 17:10:06 crc kubenswrapper[4821]: I0930 17:10:06.970214 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2fcdb48f-270d-4b71-b90b-c0bb21676381-registry-tls\") pod \"image-registry-66df7c8f76-sjl7s\" (UID: \"2fcdb48f-270d-4b71-b90b-c0bb21676381\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-sjl7s" Sep 30 17:10:06 crc kubenswrapper[4821]: I0930 17:10:06.970636 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2fcdb48f-270d-4b71-b90b-c0bb21676381-ca-trust-extracted\") pod \"image-registry-66df7c8f76-sjl7s\" (UID: \"2fcdb48f-270d-4b71-b90b-c0bb21676381\") " pod="openshift-image-registry/image-registry-66df7c8f76-sjl7s" Sep 30 17:10:06 crc kubenswrapper[4821]: I0930 17:10:06.971495 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2fcdb48f-270d-4b71-b90b-c0bb21676381-registry-certificates\") pod \"image-registry-66df7c8f76-sjl7s\" (UID: \"2fcdb48f-270d-4b71-b90b-c0bb21676381\") " pod="openshift-image-registry/image-registry-66df7c8f76-sjl7s" Sep 30 17:10:06 crc kubenswrapper[4821]: I0930 17:10:06.972057 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2fcdb48f-270d-4b71-b90b-c0bb21676381-trusted-ca\") pod \"image-registry-66df7c8f76-sjl7s\" (UID: \"2fcdb48f-270d-4b71-b90b-c0bb21676381\") " pod="openshift-image-registry/image-registry-66df7c8f76-sjl7s" Sep 30 17:10:06 crc kubenswrapper[4821]: I0930 17:10:06.975664 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2fcdb48f-270d-4b71-b90b-c0bb21676381-registry-tls\") pod \"image-registry-66df7c8f76-sjl7s\" (UID: \"2fcdb48f-270d-4b71-b90b-c0bb21676381\") " pod="openshift-image-registry/image-registry-66df7c8f76-sjl7s" Sep 30 17:10:06 crc kubenswrapper[4821]: I0930 17:10:06.976034 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2fcdb48f-270d-4b71-b90b-c0bb21676381-installation-pull-secrets\") pod \"image-registry-66df7c8f76-sjl7s\" (UID: \"2fcdb48f-270d-4b71-b90b-c0bb21676381\") " pod="openshift-image-registry/image-registry-66df7c8f76-sjl7s" Sep 30 17:10:06 crc kubenswrapper[4821]: I0930 17:10:06.988612 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2fcdb48f-270d-4b71-b90b-c0bb21676381-bound-sa-token\") pod \"image-registry-66df7c8f76-sjl7s\" (UID: \"2fcdb48f-270d-4b71-b90b-c0bb21676381\") " pod="openshift-image-registry/image-registry-66df7c8f76-sjl7s" Sep 30 17:10:06 crc kubenswrapper[4821]: I0930 17:10:06.988796 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x67ln\" (UniqueName: \"kubernetes.io/projected/2fcdb48f-270d-4b71-b90b-c0bb21676381-kube-api-access-x67ln\") pod \"image-registry-66df7c8f76-sjl7s\" (UID: \"2fcdb48f-270d-4b71-b90b-c0bb21676381\") " pod="openshift-image-registry/image-registry-66df7c8f76-sjl7s" Sep 30 17:10:07 crc kubenswrapper[4821]: I0930 17:10:07.062379 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-sjl7s" Sep 30 17:10:07 crc kubenswrapper[4821]: I0930 17:10:07.232851 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-sjl7s"] Sep 30 17:10:07 crc kubenswrapper[4821]: W0930 17:10:07.241520 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fcdb48f_270d_4b71_b90b_c0bb21676381.slice/crio-1f27739b56cb61c3b774f11aa8885344ed5ad16c4439ec15492d93daa6e76b5a WatchSource:0}: Error finding container 1f27739b56cb61c3b774f11aa8885344ed5ad16c4439ec15492d93daa6e76b5a: Status 404 returned error can't find the container with id 1f27739b56cb61c3b774f11aa8885344ed5ad16c4439ec15492d93daa6e76b5a Sep 30 17:10:07 crc kubenswrapper[4821]: I0930 17:10:07.390560 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-sjl7s" event={"ID":"2fcdb48f-270d-4b71-b90b-c0bb21676381","Type":"ContainerStarted","Data":"791ee0394f59b2c94691f65a4d64c382892952601e82b918926c9d0427165ef1"} Sep 30 17:10:07 crc kubenswrapper[4821]: I0930 17:10:07.390610 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-sjl7s" event={"ID":"2fcdb48f-270d-4b71-b90b-c0bb21676381","Type":"ContainerStarted","Data":"1f27739b56cb61c3b774f11aa8885344ed5ad16c4439ec15492d93daa6e76b5a"} Sep 30 17:10:07 crc kubenswrapper[4821]: I0930 17:10:07.390767 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-sjl7s" Sep 30 17:10:07 crc kubenswrapper[4821]: I0930 17:10:07.408777 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-sjl7s" podStartSLOduration=1.408760636 podStartE2EDuration="1.408760636s" podCreationTimestamp="2025-09-30 17:10:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:10:07.405932225 +0000 UTC m=+403.310978169" watchObservedRunningTime="2025-09-30 17:10:07.408760636 +0000 UTC m=+403.313806570" Sep 30 17:10:19 crc kubenswrapper[4821]: I0930 17:10:19.349485 4821 patch_prober.go:28] interesting pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:10:19 crc kubenswrapper[4821]: I0930 17:10:19.350063 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:10:19 crc kubenswrapper[4821]: I0930 17:10:19.350125 4821 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" Sep 30 17:10:19 crc kubenswrapper[4821]: I0930 17:10:19.350906 4821 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"694415fa80647cce635089cdbd596b460c91aca25e334ca866d1832662c4cfb8"} pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 17:10:19 crc kubenswrapper[4821]: I0930 17:10:19.350971 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" containerID="cri-o://694415fa80647cce635089cdbd596b460c91aca25e334ca866d1832662c4cfb8" gracePeriod=600 Sep 30 17:10:20 crc kubenswrapper[4821]: I0930 17:10:20.457341 4821 generic.go:334] "Generic (PLEG): container finished" podID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerID="694415fa80647cce635089cdbd596b460c91aca25e334ca866d1832662c4cfb8" exitCode=0 Sep 30 17:10:20 crc kubenswrapper[4821]: I0930 17:10:20.458191 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" event={"ID":"1c2ce348-eadc-4629-a03f-fb8924b5b434","Type":"ContainerDied","Data":"694415fa80647cce635089cdbd596b460c91aca25e334ca866d1832662c4cfb8"} Sep 30 17:10:20 crc kubenswrapper[4821]: I0930 17:10:20.458469 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" event={"ID":"1c2ce348-eadc-4629-a03f-fb8924b5b434","Type":"ContainerStarted","Data":"a96195deaa92fdbd5e1ddc64c627aa78cd37aa2134f2026cfd9b64821097de61"} Sep 30 17:10:20 crc kubenswrapper[4821]: I0930 17:10:20.458514 4821 scope.go:117] "RemoveContainer" containerID="6058994b9206cbcc9088419b7133f2833b9b513c84780f4465450f5870695096" Sep 30 17:10:27 crc kubenswrapper[4821]: I0930 17:10:27.069669 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-sjl7s" Sep 30 17:10:27 crc kubenswrapper[4821]: I0930 17:10:27.160054 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6l6qm"] Sep 30 17:10:52 crc kubenswrapper[4821]: I0930 17:10:52.195783 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" podUID="4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d" containerName="registry" containerID="cri-o://00f1a3f61c21b7ff066447f858f9978a0204e41dd62d9ea4aa2a94deae03512b" gracePeriod=30 Sep 30 17:10:52 crc kubenswrapper[4821]: I0930 17:10:52.507374 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:10:52 crc kubenswrapper[4821]: I0930 17:10:52.630941 4821 generic.go:334] "Generic (PLEG): container finished" podID="4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d" containerID="00f1a3f61c21b7ff066447f858f9978a0204e41dd62d9ea4aa2a94deae03512b" exitCode=0 Sep 30 17:10:52 crc kubenswrapper[4821]: I0930 17:10:52.630992 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" Sep 30 17:10:52 crc kubenswrapper[4821]: I0930 17:10:52.630997 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" event={"ID":"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d","Type":"ContainerDied","Data":"00f1a3f61c21b7ff066447f858f9978a0204e41dd62d9ea4aa2a94deae03512b"} Sep 30 17:10:52 crc kubenswrapper[4821]: I0930 17:10:52.631436 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-6l6qm" event={"ID":"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d","Type":"ContainerDied","Data":"8e50225478ecaf6dd9e7e938eabd178f08691129c002b54616e05b10e7243072"} Sep 30 17:10:52 crc kubenswrapper[4821]: I0930 17:10:52.631478 4821 scope.go:117] "RemoveContainer" containerID="00f1a3f61c21b7ff066447f858f9978a0204e41dd62d9ea4aa2a94deae03512b" Sep 30 17:10:52 crc kubenswrapper[4821]: I0930 17:10:52.643851 4821 scope.go:117] "RemoveContainer" containerID="00f1a3f61c21b7ff066447f858f9978a0204e41dd62d9ea4aa2a94deae03512b" Sep 30 17:10:52 crc kubenswrapper[4821]: E0930 17:10:52.644241 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00f1a3f61c21b7ff066447f858f9978a0204e41dd62d9ea4aa2a94deae03512b\": container with ID starting with 00f1a3f61c21b7ff066447f858f9978a0204e41dd62d9ea4aa2a94deae03512b not found: ID does not exist" containerID="00f1a3f61c21b7ff066447f858f9978a0204e41dd62d9ea4aa2a94deae03512b" Sep 30 17:10:52 crc kubenswrapper[4821]: I0930 17:10:52.644284 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00f1a3f61c21b7ff066447f858f9978a0204e41dd62d9ea4aa2a94deae03512b"} err="failed to get container status \"00f1a3f61c21b7ff066447f858f9978a0204e41dd62d9ea4aa2a94deae03512b\": rpc error: code = NotFound desc = could not find container \"00f1a3f61c21b7ff066447f858f9978a0204e41dd62d9ea4aa2a94deae03512b\": container with ID starting with 00f1a3f61c21b7ff066447f858f9978a0204e41dd62d9ea4aa2a94deae03512b not found: ID does not exist" Sep 30 17:10:52 crc kubenswrapper[4821]: I0930 17:10:52.669030 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-installation-pull-secrets\") pod \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " Sep 30 17:10:52 crc kubenswrapper[4821]: I0930 17:10:52.669171 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-trusted-ca\") pod \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " Sep 30 17:10:52 crc kubenswrapper[4821]: I0930 17:10:52.669199 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-registry-tls\") pod \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " Sep 30 17:10:52 crc kubenswrapper[4821]: I0930 17:10:52.670016 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d" (UID: 
"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:10:52 crc kubenswrapper[4821]: I0930 17:10:52.670191 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-bound-sa-token\") pod \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " Sep 30 17:10:52 crc kubenswrapper[4821]: I0930 17:10:52.670621 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " Sep 30 17:10:52 crc kubenswrapper[4821]: I0930 17:10:52.670671 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-ca-trust-extracted\") pod \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " Sep 30 17:10:52 crc kubenswrapper[4821]: I0930 17:10:52.670738 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jn9lz\" (UniqueName: \"kubernetes.io/projected/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-kube-api-access-jn9lz\") pod \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " Sep 30 17:10:52 crc kubenswrapper[4821]: I0930 17:10:52.671432 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-registry-certificates\") pod \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\" (UID: \"4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d\") " Sep 30 17:10:52 crc kubenswrapper[4821]: I0930 17:10:52.672409 4821 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-trusted-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:10:52 crc kubenswrapper[4821]: I0930 17:10:52.674231 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:10:52 crc kubenswrapper[4821]: I0930 17:10:52.680209 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:10:52 crc kubenswrapper[4821]: I0930 17:10:52.682481 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:10:52 crc kubenswrapper[4821]: I0930 17:10:52.682651 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-kube-api-access-jn9lz" (OuterVolumeSpecName: "kube-api-access-jn9lz") pod "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d"). InnerVolumeSpecName "kube-api-access-jn9lz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:10:52 crc kubenswrapper[4821]: I0930 17:10:52.682884 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:10:52 crc kubenswrapper[4821]: I0930 17:10:52.684233 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Sep 30 17:10:52 crc kubenswrapper[4821]: I0930 17:10:52.687546 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d" (UID: "4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:10:52 crc kubenswrapper[4821]: I0930 17:10:52.773806 4821 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-registry-certificates\") on node \"crc\" DevicePath \"\"" Sep 30 17:10:52 crc kubenswrapper[4821]: I0930 17:10:52.773843 4821 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Sep 30 17:10:52 crc kubenswrapper[4821]: I0930 17:10:52.773856 4821 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-registry-tls\") on node \"crc\" DevicePath \"\"" Sep 30 17:10:52 crc kubenswrapper[4821]: I0930 17:10:52.773869 4821 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-bound-sa-token\") on node \"crc\" DevicePath \"\"" Sep 30 17:10:52 crc kubenswrapper[4821]: I0930 17:10:52.773880 4821 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Sep 30 17:10:52 crc kubenswrapper[4821]: I0930 17:10:52.773890 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jn9lz\" (UniqueName: \"kubernetes.io/projected/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d-kube-api-access-jn9lz\") on node \"crc\" DevicePath \"\"" Sep 30 17:10:52 crc kubenswrapper[4821]: I0930 17:10:52.944258 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6l6qm"] Sep 30 17:10:52 crc kubenswrapper[4821]: I0930 17:10:52.948922 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6l6qm"] Sep 30 17:10:54 crc kubenswrapper[4821]: I0930 17:10:54.713205 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d" path="/var/lib/kubelet/pods/4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d/volumes" Sep 30 17:12:19 crc kubenswrapper[4821]: I0930 17:12:19.349268 4821 patch_prober.go:28] interesting pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:12:19 crc kubenswrapper[4821]: I0930 17:12:19.349696 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:12:47 crc kubenswrapper[4821]: I0930 17:12:47.108162 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-j9fkd"] Sep 30 17:12:47 crc kubenswrapper[4821]: E0930 17:12:47.108829 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d" containerName="registry" Sep 30 17:12:47 crc kubenswrapper[4821]: I0930 17:12:47.108842 4821 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d" containerName="registry" Sep 30 17:12:47 crc kubenswrapper[4821]: I0930 17:12:47.108947 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f3b7588-88d1-4ae7-bc8b-a30b14f8e23d" containerName="registry" Sep 30 17:12:47 crc kubenswrapper[4821]: I0930 17:12:47.109319 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-j9fkd" Sep 30 17:12:47 crc kubenswrapper[4821]: I0930 17:12:47.114827 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Sep 30 17:12:47 crc kubenswrapper[4821]: I0930 17:12:47.114901 4821 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-hhc55" Sep 30 17:12:47 crc kubenswrapper[4821]: I0930 17:12:47.114911 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Sep 30 17:12:47 crc kubenswrapper[4821]: I0930 17:12:47.124984 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-j9fkd"] Sep 30 17:12:47 crc kubenswrapper[4821]: I0930 17:12:47.134016 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-fs727"] Sep 30 17:12:47 crc kubenswrapper[4821]: I0930 17:12:47.134647 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-fs727" Sep 30 17:12:47 crc kubenswrapper[4821]: I0930 17:12:47.139444 4821 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-fvhck" Sep 30 17:12:47 crc kubenswrapper[4821]: I0930 17:12:47.144622 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-rj4sb"] Sep 30 17:12:47 crc kubenswrapper[4821]: I0930 17:12:47.145392 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-rj4sb" Sep 30 17:12:47 crc kubenswrapper[4821]: W0930 17:12:47.148925 4821 reflector.go:561] object-"cert-manager"/"cert-manager-webhook-dockercfg-mrrl6": failed to list *v1.Secret: secrets "cert-manager-webhook-dockercfg-mrrl6" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "cert-manager": no relationship found between node 'crc' and this object Sep 30 17:12:47 crc kubenswrapper[4821]: E0930 17:12:47.148968 4821 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-mrrl6\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cert-manager-webhook-dockercfg-mrrl6\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"cert-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Sep 30 17:12:47 crc kubenswrapper[4821]: I0930 17:12:47.152582 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-fs727"] Sep 30 17:12:47 crc kubenswrapper[4821]: I0930 17:12:47.161119 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-rj4sb"] Sep 30 17:12:47 crc kubenswrapper[4821]: I0930 17:12:47.231474 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqnpp\" (UniqueName: \"kubernetes.io/projected/73e8eb4d-cb66-48ae-b04b-303bb8e66a6e-kube-api-access-xqnpp\") pod \"cert-manager-5b446d88c5-fs727\" (UID: \"73e8eb4d-cb66-48ae-b04b-303bb8e66a6e\") " pod="cert-manager/cert-manager-5b446d88c5-fs727" Sep 30 17:12:47 crc kubenswrapper[4821]: I0930 17:12:47.231517 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9qzh\" (UniqueName: \"kubernetes.io/projected/0c1c8a34-9395-4425-b589-dc71349c9cbe-kube-api-access-r9qzh\") pod \"cert-manager-webhook-5655c58dd6-rj4sb\" (UID: \"0c1c8a34-9395-4425-b589-dc71349c9cbe\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-rj4sb" Sep 30 17:12:47 crc kubenswrapper[4821]: I0930 17:12:47.231563 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsg2d\" (UniqueName: \"kubernetes.io/projected/75333b8a-3bd4-4aed-8dec-1399b3b8d7f8-kube-api-access-hsg2d\") pod \"cert-manager-cainjector-7f985d654d-j9fkd\" (UID: \"75333b8a-3bd4-4aed-8dec-1399b3b8d7f8\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-j9fkd" Sep 30 17:12:47 crc kubenswrapper[4821]: I0930 17:12:47.332202 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqnpp\" (UniqueName: \"kubernetes.io/projected/73e8eb4d-cb66-48ae-b04b-303bb8e66a6e-kube-api-access-xqnpp\") pod \"cert-manager-5b446d88c5-fs727\" (UID: \"73e8eb4d-cb66-48ae-b04b-303bb8e66a6e\") " pod="cert-manager/cert-manager-5b446d88c5-fs727" Sep 30 17:12:47 crc kubenswrapper[4821]: I0930 17:12:47.332257 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9qzh\" (UniqueName: \"kubernetes.io/projected/0c1c8a34-9395-4425-b589-dc71349c9cbe-kube-api-access-r9qzh\") pod \"cert-manager-webhook-5655c58dd6-rj4sb\" (UID: \"0c1c8a34-9395-4425-b589-dc71349c9cbe\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-rj4sb" Sep 30 17:12:47 crc kubenswrapper[4821]: I0930 17:12:47.332288 4821 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hsg2d\" (UniqueName: \"kubernetes.io/projected/75333b8a-3bd4-4aed-8dec-1399b3b8d7f8-kube-api-access-hsg2d\") pod \"cert-manager-cainjector-7f985d654d-j9fkd\" (UID: \"75333b8a-3bd4-4aed-8dec-1399b3b8d7f8\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-j9fkd" Sep 30 17:12:47 crc kubenswrapper[4821]: I0930 17:12:47.351941 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqnpp\" (UniqueName: \"kubernetes.io/projected/73e8eb4d-cb66-48ae-b04b-303bb8e66a6e-kube-api-access-xqnpp\") pod \"cert-manager-5b446d88c5-fs727\" (UID: \"73e8eb4d-cb66-48ae-b04b-303bb8e66a6e\") " pod="cert-manager/cert-manager-5b446d88c5-fs727" Sep 30 17:12:47 crc kubenswrapper[4821]: I0930 17:12:47.351947 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsg2d\" (UniqueName: \"kubernetes.io/projected/75333b8a-3bd4-4aed-8dec-1399b3b8d7f8-kube-api-access-hsg2d\") pod \"cert-manager-cainjector-7f985d654d-j9fkd\" (UID: \"75333b8a-3bd4-4aed-8dec-1399b3b8d7f8\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-j9fkd" Sep 30 17:12:47 crc kubenswrapper[4821]: I0930 17:12:47.356298 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9qzh\" (UniqueName: \"kubernetes.io/projected/0c1c8a34-9395-4425-b589-dc71349c9cbe-kube-api-access-r9qzh\") pod \"cert-manager-webhook-5655c58dd6-rj4sb\" (UID: \"0c1c8a34-9395-4425-b589-dc71349c9cbe\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-rj4sb" Sep 30 17:12:47 crc kubenswrapper[4821]: I0930 17:12:47.430701 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-j9fkd" Sep 30 17:12:47 crc kubenswrapper[4821]: I0930 17:12:47.452809 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-fs727" Sep 30 17:12:47 crc kubenswrapper[4821]: I0930 17:12:47.696763 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-j9fkd"] Sep 30 17:12:47 crc kubenswrapper[4821]: I0930 17:12:47.708104 4821 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 30 17:12:47 crc kubenswrapper[4821]: I0930 17:12:47.723099 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-fs727"] Sep 30 17:12:47 crc kubenswrapper[4821]: W0930 17:12:47.727816 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73e8eb4d_cb66_48ae_b04b_303bb8e66a6e.slice/crio-0d8fd2664b99232d7de22d898ac161484d6093ed721866e191dea12ade1b0d0d WatchSource:0}: Error finding container 0d8fd2664b99232d7de22d898ac161484d6093ed721866e191dea12ade1b0d0d: Status 404 returned error can't find the container with id 0d8fd2664b99232d7de22d898ac161484d6093ed721866e191dea12ade1b0d0d Sep 30 17:12:48 crc kubenswrapper[4821]: I0930 17:12:48.187613 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-fs727" event={"ID":"73e8eb4d-cb66-48ae-b04b-303bb8e66a6e","Type":"ContainerStarted","Data":"0d8fd2664b99232d7de22d898ac161484d6093ed721866e191dea12ade1b0d0d"} Sep 30 17:12:48 crc kubenswrapper[4821]: I0930 17:12:48.188498 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-j9fkd" event={"ID":"75333b8a-3bd4-4aed-8dec-1399b3b8d7f8","Type":"ContainerStarted","Data":"26e0742733bb742fdd9e52a444779184f170bb91bcdb875df2d6d25e6df4b0a9"} Sep 30 17:12:48 crc kubenswrapper[4821]: I0930 17:12:48.240942 4821 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-mrrl6" Sep 30 17:12:48 crc kubenswrapper[4821]: I0930 17:12:48.249201 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-rj4sb" Sep 30 17:12:48 crc kubenswrapper[4821]: I0930 17:12:48.445260 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-rj4sb"] Sep 30 17:12:48 crc kubenswrapper[4821]: W0930 17:12:48.475832 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c1c8a34_9395_4425_b589_dc71349c9cbe.slice/crio-14ef750527d100c1791f10a1c603ad0bcb179d109b4c53721d6a3224c2a4f939 WatchSource:0}: Error finding container 14ef750527d100c1791f10a1c603ad0bcb179d109b4c53721d6a3224c2a4f939: Status 404 returned error can't find the container with id 14ef750527d100c1791f10a1c603ad0bcb179d109b4c53721d6a3224c2a4f939 Sep 30 17:12:49 crc kubenswrapper[4821]: I0930 17:12:49.193603 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-rj4sb" event={"ID":"0c1c8a34-9395-4425-b589-dc71349c9cbe","Type":"ContainerStarted","Data":"14ef750527d100c1791f10a1c603ad0bcb179d109b4c53721d6a3224c2a4f939"} Sep 30 17:12:49 crc kubenswrapper[4821]: I0930 17:12:49.350253 4821 patch_prober.go:28] interesting pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:12:49 crc kubenswrapper[4821]: I0930 17:12:49.350319 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:12:50 crc kubenswrapper[4821]: I0930 17:12:50.202925 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-fs727" event={"ID":"73e8eb4d-cb66-48ae-b04b-303bb8e66a6e","Type":"ContainerStarted","Data":"e305a23fafef2e3850d2b5b3c87ac9f1216076aa0d38f27fe56a265575feb4e5"} Sep 30 17:12:50 crc kubenswrapper[4821]: I0930 17:12:50.222116 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-fs727" podStartSLOduration=1.094595691 podStartE2EDuration="3.222095184s" podCreationTimestamp="2025-09-30 17:12:47 +0000 UTC" firstStartedPulling="2025-09-30 17:12:47.731294852 +0000 UTC m=+563.636340796" lastFinishedPulling="2025-09-30 17:12:49.858794345 +0000 UTC m=+565.763840289" observedRunningTime="2025-09-30 17:12:50.221879789 +0000 UTC m=+566.126925743" watchObservedRunningTime="2025-09-30 17:12:50.222095184 +0000 UTC m=+566.127141128" Sep 30 17:12:52 crc kubenswrapper[4821]: I0930 17:12:52.212238 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-rj4sb" event={"ID":"0c1c8a34-9395-4425-b589-dc71349c9cbe","Type":"ContainerStarted","Data":"3b1e624ec30f0f5a66da27941753b7cd76417ce43ea974ca21d5988a270639c2"} Sep 30 17:12:52 crc kubenswrapper[4821]: I0930 17:12:52.212660 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-rj4sb" Sep 30 17:12:52 crc kubenswrapper[4821]: I0930 17:12:52.213876 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-j9fkd" 
event={"ID":"75333b8a-3bd4-4aed-8dec-1399b3b8d7f8","Type":"ContainerStarted","Data":"031bd85b6cbf77c73de309dfebac7c8d443dbd986f6827c5c4bea96665fff5da"} Sep 30 17:12:52 crc kubenswrapper[4821]: I0930 17:12:52.226179 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-rj4sb" podStartSLOduration=2.328635467 podStartE2EDuration="5.226163615s" podCreationTimestamp="2025-09-30 17:12:47 +0000 UTC" firstStartedPulling="2025-09-30 17:12:48.47826388 +0000 UTC m=+564.383309824" lastFinishedPulling="2025-09-30 17:12:51.375792028 +0000 UTC m=+567.280837972" observedRunningTime="2025-09-30 17:12:52.224685238 +0000 UTC m=+568.129731182" watchObservedRunningTime="2025-09-30 17:12:52.226163615 +0000 UTC m=+568.131209559" Sep 30 17:12:52 crc kubenswrapper[4821]: I0930 17:12:52.239128 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-j9fkd" podStartSLOduration=1.575535646 podStartE2EDuration="5.239056158s" podCreationTimestamp="2025-09-30 17:12:47 +0000 UTC" firstStartedPulling="2025-09-30 17:12:47.707845415 +0000 UTC m=+563.612891359" lastFinishedPulling="2025-09-30 17:12:51.371365927 +0000 UTC m=+567.276411871" observedRunningTime="2025-09-30 17:12:52.237415847 +0000 UTC m=+568.142461791" watchObservedRunningTime="2025-09-30 17:12:52.239056158 +0000 UTC m=+568.144102102" Sep 30 17:12:57 crc kubenswrapper[4821]: I0930 17:12:57.794684 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-k7m5w"] Sep 30 17:12:57 crc kubenswrapper[4821]: I0930 17:12:57.795717 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="ovn-controller" containerID="cri-o://e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de" gracePeriod=30 Sep 30 17:12:57 crc kubenswrapper[4821]: I0930 17:12:57.796167 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="sbdb" containerID="cri-o://d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6" gracePeriod=30 Sep 30 17:12:57 crc kubenswrapper[4821]: I0930 17:12:57.796238 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="nbdb" containerID="cri-o://2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655" gracePeriod=30 Sep 30 17:12:57 crc kubenswrapper[4821]: I0930 17:12:57.796325 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="northd" containerID="cri-o://b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7" gracePeriod=30 Sep 30 17:12:57 crc kubenswrapper[4821]: I0930 17:12:57.796384 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909" gracePeriod=30 Sep 30 17:12:57 crc kubenswrapper[4821]: I0930 17:12:57.796440 4821 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="kube-rbac-proxy-node" containerID="cri-o://8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730" gracePeriod=30 Sep 30 17:12:57 crc kubenswrapper[4821]: I0930 17:12:57.796494 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="ovn-acl-logging" containerID="cri-o://b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf" gracePeriod=30 Sep 30 17:12:57 crc kubenswrapper[4821]: I0930 17:12:57.826017 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="ovnkube-controller" containerID="cri-o://5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b" gracePeriod=30 Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.130515 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7m5w_6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca/ovnkube-controller/3.log" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.132441 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7m5w_6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca/ovn-acl-logging/0.log" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.133031 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7m5w_6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca/ovn-controller/0.log" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.133414 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.170051 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-slash\") pod \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.170149 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-run-netns\") pod \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.170162 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-slash" (OuterVolumeSpecName: "host-slash") pod "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" (UID: "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.170189 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-var-lib-openvswitch\") pod \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.170198 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" (UID: "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.170209 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-node-log\") pod \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.170220 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" (UID: "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.170222 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-cni-bin\") pod \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.170248 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqmhn\" (UniqueName: \"kubernetes.io/projected/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-kube-api-access-pqmhn\") pod \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.170251 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" (UID: "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.170263 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-log-socket\") pod \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.170278 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-node-log" (OuterVolumeSpecName: "node-log") pod "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" (UID: "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.170284 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-systemd-units\") pod \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.170298 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-cni-netd\") pod \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.170313 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-env-overrides\") pod \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.170329 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-run-ovn-kubernetes\") pod \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.170346 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-ovn-node-metrics-cert\") pod \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.170360 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-etc-openvswitch\") pod \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.170387 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-ovnkube-script-lib\") pod \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.170424 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-kubelet\") pod \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.170464 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-run-systemd\") pod \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.170489 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-run-ovn\") pod \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\" 
(UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.170506 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-run-openvswitch\") pod \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.170529 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-ovnkube-config\") pod \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.170555 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-var-lib-cni-networks-ovn-kubernetes\") pod \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\" (UID: \"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca\") " Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.170760 4821 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-slash\") on node \"crc\" DevicePath \"\"" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.170773 4821 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-run-netns\") on node \"crc\" DevicePath \"\"" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.170784 4821 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.170794 4821 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-node-log\") on node \"crc\" DevicePath \"\"" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.170805 4821 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-cni-bin\") on node \"crc\" DevicePath \"\"" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.170835 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" (UID: "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.170862 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-log-socket" (OuterVolumeSpecName: "log-socket") pod "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" (UID: "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.170884 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" (UID: "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.170916 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" (UID: "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.171266 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" (UID: "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.171300 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" (UID: "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.172447 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" (UID: "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.172501 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" (UID: "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.172521 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" (UID: "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.172607 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" (UID: "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.172791 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" (UID: "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.172828 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" (UID: "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.177117 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" (UID: "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.178316 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-kube-api-access-pqmhn" (OuterVolumeSpecName: "kube-api-access-pqmhn") pod "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" (UID: "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca"). InnerVolumeSpecName "kube-api-access-pqmhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.186359 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" (UID: "6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.186584 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cmf7r"] Sep 30 17:12:58 crc kubenswrapper[4821]: E0930 17:12:58.186754 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="ovnkube-controller" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.186769 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="ovnkube-controller" Sep 30 17:12:58 crc kubenswrapper[4821]: E0930 17:12:58.186778 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="ovn-acl-logging" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.186784 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="ovn-acl-logging" Sep 30 17:12:58 crc kubenswrapper[4821]: E0930 17:12:58.186793 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="nbdb" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.186799 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="nbdb" Sep 30 17:12:58 crc kubenswrapper[4821]: E0930 17:12:58.186812 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="ovnkube-controller" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.186817 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="ovnkube-controller" Sep 30 17:12:58 crc kubenswrapper[4821]: E0930 17:12:58.186825 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="ovnkube-controller" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.186831 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="ovnkube-controller" Sep 30 17:12:58 crc kubenswrapper[4821]: E0930 17:12:58.186837 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="northd" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.186843 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="northd" Sep 30 17:12:58 crc kubenswrapper[4821]: E0930 17:12:58.186853 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="kube-rbac-proxy-node" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.186859 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="kube-rbac-proxy-node" Sep 30 17:12:58 crc kubenswrapper[4821]: E0930 17:12:58.186867 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="kubecfg-setup" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.186872 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="kubecfg-setup" Sep 30 17:12:58 crc kubenswrapper[4821]: E0930 17:12:58.186879 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" 
containerName="ovnkube-controller" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.186884 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="ovnkube-controller" Sep 30 17:12:58 crc kubenswrapper[4821]: E0930 17:12:58.186890 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="ovn-controller" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.186896 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="ovn-controller" Sep 30 17:12:58 crc kubenswrapper[4821]: E0930 17:12:58.186907 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="sbdb" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.186912 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="sbdb" Sep 30 17:12:58 crc kubenswrapper[4821]: E0930 17:12:58.186919 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="kube-rbac-proxy-ovn-metrics" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.186925 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="kube-rbac-proxy-ovn-metrics" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.187026 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="ovnkube-controller" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.187035 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="ovn-controller" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.187042 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="ovnkube-controller" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.187049 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="kube-rbac-proxy-node" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.187056 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="ovnkube-controller" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.187064 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="ovnkube-controller" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.187073 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="ovn-acl-logging" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.187098 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="northd" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.187106 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="nbdb" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.187113 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="sbdb" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.187120 4821 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="kube-rbac-proxy-ovn-metrics" Sep 30 17:12:58 crc kubenswrapper[4821]: E0930 17:12:58.187203 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="ovnkube-controller" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.187209 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="ovnkube-controller" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.187293 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerName="ovnkube-controller" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.188798 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.245355 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7m5w_6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca/ovnkube-controller/3.log" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.247879 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7m5w_6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca/ovn-acl-logging/0.log" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.248393 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k7m5w_6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca/ovn-controller/0.log" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.248726 4821 generic.go:334] "Generic (PLEG): container finished" podID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerID="5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b" exitCode=0 Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.248751 4821 generic.go:334] "Generic (PLEG): container finished" podID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerID="d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6" exitCode=0 Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.248761 4821 generic.go:334] "Generic (PLEG): container finished" podID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerID="2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655" exitCode=0 Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.248784 4821 generic.go:334] "Generic (PLEG): container finished" podID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerID="b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7" exitCode=0 Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.248793 4821 generic.go:334] "Generic (PLEG): container finished" podID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerID="5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909" exitCode=0 Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.248800 4821 generic.go:334] "Generic (PLEG): container finished" podID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerID="8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730" exitCode=0 Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.248807 4821 generic.go:334] "Generic (PLEG): container finished" podID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" containerID="b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf" exitCode=143 Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.248816 4821 generic.go:334] "Generic (PLEG): container finished" podID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" 
containerID="e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de" exitCode=143 Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.248829 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" event={"ID":"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca","Type":"ContainerDied","Data":"5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.248913 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" event={"ID":"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca","Type":"ContainerDied","Data":"d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.248931 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" event={"ID":"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca","Type":"ContainerDied","Data":"2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.248973 4821 scope.go:117] "RemoveContainer" containerID="5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.249009 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" event={"ID":"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca","Type":"ContainerDied","Data":"b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.249031 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" event={"ID":"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca","Type":"ContainerDied","Data":"5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.249061 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" event={"ID":"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca","Type":"ContainerDied","Data":"8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.249074 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.249178 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.249187 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.249211 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.249220 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.249227 4821 pod_container_deletor.go:114] "Failed to issue the request to remove 
container" containerID={"Type":"cri-o","ID":"8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.249899 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.249916 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.249922 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.249933 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" event={"ID":"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca","Type":"ContainerDied","Data":"b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.249945 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.249951 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.249957 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.249963 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.249969 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.249974 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.249979 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.249984 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.249990 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.249994 4821 pod_container_deletor.go:114] "Failed to issue the request to remove 
container" containerID={"Type":"cri-o","ID":"b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.250001 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" event={"ID":"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca","Type":"ContainerDied","Data":"e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.250009 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.250017 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.250022 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.250027 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.250032 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.250037 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.250041 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.250046 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.250051 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.250056 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.250063 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" event={"ID":"6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca","Type":"ContainerDied","Data":"6c27a98f860004d05b6d45efd2de111a51e169967e8a1a8744009479a5628e2d"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.250070 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.250092 4821 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.250098 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.250104 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.250111 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.250116 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.250121 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.250126 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.250131 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.250136 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.249145 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k7m5w" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.251059 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9sjg_c84981f2-eb86-4d0d-9322-db1b62feeac8/kube-multus/2.log" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.251553 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9sjg_c84981f2-eb86-4d0d-9322-db1b62feeac8/kube-multus/1.log" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.251595 4821 generic.go:334] "Generic (PLEG): container finished" podID="c84981f2-eb86-4d0d-9322-db1b62feeac8" containerID="992fb7240af9437f906ca1508151e95430f320926f2db765fd848ac767958dec" exitCode=2 Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.251625 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h9sjg" event={"ID":"c84981f2-eb86-4d0d-9322-db1b62feeac8","Type":"ContainerDied","Data":"992fb7240af9437f906ca1508151e95430f320926f2db765fd848ac767958dec"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.251653 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bb09282aaacd229c66305d60e720c01a4f2ae0ffa6aadaf7e89fb3976883bb66"} Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.252202 4821 scope.go:117] "RemoveContainer" containerID="992fb7240af9437f906ca1508151e95430f320926f2db765fd848ac767958dec" Sep 30 17:12:58 crc kubenswrapper[4821]: E0930 17:12:58.252451 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-h9sjg_openshift-multus(c84981f2-eb86-4d0d-9322-db1b62feeac8)\"" pod="openshift-multus/multus-h9sjg" podUID="c84981f2-eb86-4d0d-9322-db1b62feeac8" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.252722 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-rj4sb" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.267734 4821 scope.go:117] "RemoveContainer" containerID="f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.271569 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-host-cni-bin\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.271607 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-host-cni-netd\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.271629 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-ovnkube-config\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.271660 4821 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-run-openvswitch\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.271676 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlhgr\" (UniqueName: \"kubernetes.io/projected/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-kube-api-access-jlhgr\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.271693 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-ovnkube-script-lib\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.271710 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.271725 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-host-kubelet\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.271741 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-host-run-ovn-kubernetes\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.271757 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-host-run-netns\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.271772 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-node-log\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.271787 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-host-slash\") pod \"ovnkube-node-cmf7r\" (UID: 
\"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.271801 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-var-lib-openvswitch\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.271814 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-log-socket\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.271832 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-run-systemd\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.271872 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-systemd-units\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.271890 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-etc-openvswitch\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.271906 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-ovn-node-metrics-cert\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.271927 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-run-ovn\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.271951 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-env-overrides\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.272590 4821 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-ovnkube-script-lib\") on 
node \"crc\" DevicePath \"\"" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.272618 4821 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-kubelet\") on node \"crc\" DevicePath \"\"" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.272629 4821 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-run-systemd\") on node \"crc\" DevicePath \"\"" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.272639 4821 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-run-ovn\") on node \"crc\" DevicePath \"\"" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.272649 4821 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-run-openvswitch\") on node \"crc\" DevicePath \"\"" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.272658 4821 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-ovnkube-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.272668 4821 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.272859 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqmhn\" (UniqueName: \"kubernetes.io/projected/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-kube-api-access-pqmhn\") on node \"crc\" DevicePath \"\"" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.272868 4821 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-log-socket\") on node \"crc\" DevicePath \"\"" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.272876 4821 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-systemd-units\") on node \"crc\" DevicePath \"\"" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.272930 4821 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-cni-netd\") on node \"crc\" DevicePath \"\"" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.272945 4821 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-env-overrides\") on node \"crc\" DevicePath \"\"" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.272958 4821 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.272969 4821 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-ovn-node-metrics-cert\") on node 
\"crc\" DevicePath \"\"" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.272980 4821 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.290117 4821 scope.go:117] "RemoveContainer" containerID="d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.311800 4821 scope.go:117] "RemoveContainer" containerID="2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.316205 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-k7m5w"] Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.321461 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-k7m5w"] Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.326314 4821 scope.go:117] "RemoveContainer" containerID="b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.337626 4821 scope.go:117] "RemoveContainer" containerID="5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.348881 4821 scope.go:117] "RemoveContainer" containerID="8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.363445 4821 scope.go:117] "RemoveContainer" containerID="b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.374537 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-run-ovn\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.374577 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-env-overrides\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.374599 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-host-cni-bin\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.374618 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-host-cni-netd\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.374637 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-ovnkube-config\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.374676 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-run-openvswitch\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.374694 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlhgr\" (UniqueName: \"kubernetes.io/projected/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-kube-api-access-jlhgr\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.374710 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-ovnkube-script-lib\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.374729 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-host-kubelet\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.374731 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-host-cni-bin\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.374772 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.374808 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-run-openvswitch\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.374819 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-host-cni-netd\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.374746 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.374959 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-host-run-ovn-kubernetes\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.375024 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-node-log\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.375057 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-host-run-netns\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.375063 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-host-kubelet\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.375111 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-host-run-ovn-kubernetes\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.375119 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-host-slash\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.375142 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-host-run-netns\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.375147 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-node-log\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.375122 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-env-overrides\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.375163 4821 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-host-slash\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.375175 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-var-lib-openvswitch\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.375205 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-log-socket\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.375237 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-run-systemd\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.375330 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-systemd-units\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.375360 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-etc-openvswitch\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.375384 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-ovn-node-metrics-cert\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.375422 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-ovnkube-config\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.375584 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-log-socket\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.375608 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-var-lib-openvswitch\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.375624 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-ovnkube-script-lib\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.375676 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-run-systemd\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.375630 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-systemd-units\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.375722 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-etc-openvswitch\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.375874 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-run-ovn\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.377958 4821 scope.go:117] "RemoveContainer" containerID="e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.378241 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-ovn-node-metrics-cert\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.397946 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlhgr\" (UniqueName: \"kubernetes.io/projected/3bba582d-4b0d-4f17-af54-d4a5c5ce2989-kube-api-access-jlhgr\") pod \"ovnkube-node-cmf7r\" (UID: \"3bba582d-4b0d-4f17-af54-d4a5c5ce2989\") " pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.400984 4821 scope.go:117] "RemoveContainer" containerID="b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.415579 4821 scope.go:117] "RemoveContainer" containerID="5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b" Sep 30 17:12:58 crc kubenswrapper[4821]: E0930 17:12:58.416104 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b\": container with ID starting with 5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b not found: ID does not exist" containerID="5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.416132 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b"} err="failed to get container status \"5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b\": rpc error: code = NotFound desc = could not find container \"5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b\": container with ID starting with 5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.416154 4821 scope.go:117] "RemoveContainer" containerID="f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437" Sep 30 17:12:58 crc kubenswrapper[4821]: E0930 17:12:58.416484 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437\": container with ID starting with f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437 not found: ID does not exist" containerID="f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.416505 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437"} err="failed to get container status \"f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437\": rpc error: code = NotFound desc = could not find container \"f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437\": container with ID starting with f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437 not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.416519 4821 scope.go:117] "RemoveContainer" containerID="d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6" Sep 30 17:12:58 crc kubenswrapper[4821]: E0930 17:12:58.416771 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6\": container with ID starting with d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6 not found: ID does not exist" containerID="d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.416795 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6"} err="failed to get container status \"d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6\": rpc error: code = NotFound desc = could not find container \"d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6\": container with ID starting with d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6 not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.416807 4821 scope.go:117] "RemoveContainer" containerID="2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655" 
Sep 30 17:12:58 crc kubenswrapper[4821]: E0930 17:12:58.417044 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655\": container with ID starting with 2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655 not found: ID does not exist" containerID="2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.417064 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655"} err="failed to get container status \"2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655\": rpc error: code = NotFound desc = could not find container \"2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655\": container with ID starting with 2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655 not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.417089 4821 scope.go:117] "RemoveContainer" containerID="b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7" Sep 30 17:12:58 crc kubenswrapper[4821]: E0930 17:12:58.417327 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7\": container with ID starting with b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7 not found: ID does not exist" containerID="b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.417348 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7"} err="failed to get container status \"b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7\": rpc error: code = NotFound desc = could not find container \"b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7\": container with ID starting with b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7 not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.417359 4821 scope.go:117] "RemoveContainer" containerID="5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909" Sep 30 17:12:58 crc kubenswrapper[4821]: E0930 17:12:58.417583 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909\": container with ID starting with 5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909 not found: ID does not exist" containerID="5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.417602 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909"} err="failed to get container status \"5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909\": rpc error: code = NotFound desc = could not find container \"5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909\": container with ID starting with 5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909 not found: ID does not exist" Sep 30 17:12:58 crc 
kubenswrapper[4821]: I0930 17:12:58.417614 4821 scope.go:117] "RemoveContainer" containerID="8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730" Sep 30 17:12:58 crc kubenswrapper[4821]: E0930 17:12:58.417827 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730\": container with ID starting with 8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730 not found: ID does not exist" containerID="8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.417844 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730"} err="failed to get container status \"8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730\": rpc error: code = NotFound desc = could not find container \"8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730\": container with ID starting with 8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730 not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.417856 4821 scope.go:117] "RemoveContainer" containerID="b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf" Sep 30 17:12:58 crc kubenswrapper[4821]: E0930 17:12:58.418074 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf\": container with ID starting with b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf not found: ID does not exist" containerID="b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.418105 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf"} err="failed to get container status \"b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf\": rpc error: code = NotFound desc = could not find container \"b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf\": container with ID starting with b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.418116 4821 scope.go:117] "RemoveContainer" containerID="e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de" Sep 30 17:12:58 crc kubenswrapper[4821]: E0930 17:12:58.418357 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de\": container with ID starting with e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de not found: ID does not exist" containerID="e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.418381 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de"} err="failed to get container status \"e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de\": rpc error: code = NotFound desc = could not find container 
\"e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de\": container with ID starting with e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.418395 4821 scope.go:117] "RemoveContainer" containerID="b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499" Sep 30 17:12:58 crc kubenswrapper[4821]: E0930 17:12:58.418673 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\": container with ID starting with b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499 not found: ID does not exist" containerID="b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.418690 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499"} err="failed to get container status \"b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\": rpc error: code = NotFound desc = could not find container \"b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\": container with ID starting with b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499 not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.418702 4821 scope.go:117] "RemoveContainer" containerID="5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.418926 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b"} err="failed to get container status \"5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b\": rpc error: code = NotFound desc = could not find container \"5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b\": container with ID starting with 5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.418941 4821 scope.go:117] "RemoveContainer" containerID="f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.419159 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437"} err="failed to get container status \"f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437\": rpc error: code = NotFound desc = could not find container \"f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437\": container with ID starting with f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437 not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.419176 4821 scope.go:117] "RemoveContainer" containerID="d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.419405 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6"} err="failed to get container status \"d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6\": rpc error: code = NotFound desc = could not find container 
\"d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6\": container with ID starting with d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6 not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.419422 4821 scope.go:117] "RemoveContainer" containerID="2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.419664 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655"} err="failed to get container status \"2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655\": rpc error: code = NotFound desc = could not find container \"2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655\": container with ID starting with 2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655 not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.419683 4821 scope.go:117] "RemoveContainer" containerID="b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.419881 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7"} err="failed to get container status \"b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7\": rpc error: code = NotFound desc = could not find container \"b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7\": container with ID starting with b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7 not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.419924 4821 scope.go:117] "RemoveContainer" containerID="5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.420712 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909"} err="failed to get container status \"5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909\": rpc error: code = NotFound desc = could not find container \"5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909\": container with ID starting with 5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909 not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.420728 4821 scope.go:117] "RemoveContainer" containerID="8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.420932 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730"} err="failed to get container status \"8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730\": rpc error: code = NotFound desc = could not find container \"8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730\": container with ID starting with 8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730 not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.420948 4821 scope.go:117] "RemoveContainer" containerID="b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.421169 4821 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf"} err="failed to get container status \"b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf\": rpc error: code = NotFound desc = could not find container \"b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf\": container with ID starting with b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.421188 4821 scope.go:117] "RemoveContainer" containerID="e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.421386 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de"} err="failed to get container status \"e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de\": rpc error: code = NotFound desc = could not find container \"e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de\": container with ID starting with e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.421405 4821 scope.go:117] "RemoveContainer" containerID="b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.421648 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499"} err="failed to get container status \"b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\": rpc error: code = NotFound desc = could not find container \"b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\": container with ID starting with b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499 not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.421665 4821 scope.go:117] "RemoveContainer" containerID="5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.421855 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b"} err="failed to get container status \"5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b\": rpc error: code = NotFound desc = could not find container \"5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b\": container with ID starting with 5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.421870 4821 scope.go:117] "RemoveContainer" containerID="f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.422067 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437"} err="failed to get container status \"f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437\": rpc error: code = NotFound desc = could not find container \"f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437\": container with ID starting with 
f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437 not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.422097 4821 scope.go:117] "RemoveContainer" containerID="d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.435564 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6"} err="failed to get container status \"d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6\": rpc error: code = NotFound desc = could not find container \"d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6\": container with ID starting with d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6 not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.435607 4821 scope.go:117] "RemoveContainer" containerID="2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.442483 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655"} err="failed to get container status \"2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655\": rpc error: code = NotFound desc = could not find container \"2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655\": container with ID starting with 2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655 not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.442536 4821 scope.go:117] "RemoveContainer" containerID="b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.442978 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7"} err="failed to get container status \"b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7\": rpc error: code = NotFound desc = could not find container \"b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7\": container with ID starting with b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7 not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.442995 4821 scope.go:117] "RemoveContainer" containerID="5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.443272 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909"} err="failed to get container status \"5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909\": rpc error: code = NotFound desc = could not find container \"5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909\": container with ID starting with 5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909 not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.443291 4821 scope.go:117] "RemoveContainer" containerID="8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.443523 4821 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730"} err="failed to get container status \"8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730\": rpc error: code = NotFound desc = could not find container \"8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730\": container with ID starting with 8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730 not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.443563 4821 scope.go:117] "RemoveContainer" containerID="b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.443812 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf"} err="failed to get container status \"b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf\": rpc error: code = NotFound desc = could not find container \"b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf\": container with ID starting with b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.443829 4821 scope.go:117] "RemoveContainer" containerID="e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.444122 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de"} err="failed to get container status \"e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de\": rpc error: code = NotFound desc = could not find container \"e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de\": container with ID starting with e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.444139 4821 scope.go:117] "RemoveContainer" containerID="b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.447493 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499"} err="failed to get container status \"b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\": rpc error: code = NotFound desc = could not find container \"b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\": container with ID starting with b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499 not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.447524 4821 scope.go:117] "RemoveContainer" containerID="5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.447839 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b"} err="failed to get container status \"5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b\": rpc error: code = NotFound desc = could not find container \"5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b\": container with ID starting with 5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b not found: ID does not exist" Sep 
30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.447913 4821 scope.go:117] "RemoveContainer" containerID="f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.448180 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437"} err="failed to get container status \"f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437\": rpc error: code = NotFound desc = could not find container \"f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437\": container with ID starting with f87650c419798aa4e552bc4de873cf3fdacf8924e926aec7c1d257aa812bb437 not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.448200 4821 scope.go:117] "RemoveContainer" containerID="d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.448387 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6"} err="failed to get container status \"d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6\": rpc error: code = NotFound desc = could not find container \"d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6\": container with ID starting with d4c561f18a39ddcaa488cee0ba729784ddf9b9b1c842b39195b264f1bd58e5a6 not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.448403 4821 scope.go:117] "RemoveContainer" containerID="2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.448574 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655"} err="failed to get container status \"2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655\": rpc error: code = NotFound desc = could not find container \"2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655\": container with ID starting with 2c372fe6d9a8bf67e90eb84895ed02521a23a4318e9ea4b20bb58cc6b3ef6655 not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.448593 4821 scope.go:117] "RemoveContainer" containerID="b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.448760 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7"} err="failed to get container status \"b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7\": rpc error: code = NotFound desc = could not find container \"b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7\": container with ID starting with b4401708a9f24e32f2820fbfc4e213d7c8daf5a3881d87e7864b4e6071ecaee7 not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.448775 4821 scope.go:117] "RemoveContainer" containerID="5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.448941 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909"} err="failed to get container status 
\"5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909\": rpc error: code = NotFound desc = could not find container \"5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909\": container with ID starting with 5cf4cdce8e69538c8c14b5902eabcb335c8c864bec2a5f692cb54c361f4f3909 not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.448954 4821 scope.go:117] "RemoveContainer" containerID="8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.449135 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730"} err="failed to get container status \"8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730\": rpc error: code = NotFound desc = could not find container \"8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730\": container with ID starting with 8c923640849eb11ff7108c2265e787651ae4b96662316f789fff1504e27b7730 not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.449152 4821 scope.go:117] "RemoveContainer" containerID="b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.449394 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf"} err="failed to get container status \"b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf\": rpc error: code = NotFound desc = could not find container \"b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf\": container with ID starting with b35fd425e19cd731d0d0b6471f5289dc9721cd34f2933d4c16ce39a89547c8cf not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.449492 4821 scope.go:117] "RemoveContainer" containerID="e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.449803 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de"} err="failed to get container status \"e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de\": rpc error: code = NotFound desc = could not find container \"e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de\": container with ID starting with e430055cf1a97d5c5bba15cafd282377ed42abcb37991d9553af47a0969f28de not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.449901 4821 scope.go:117] "RemoveContainer" containerID="b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.450181 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499"} err="failed to get container status \"b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\": rpc error: code = NotFound desc = could not find container \"b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499\": container with ID starting with b0a885cb50763834b74f2f6e954253240543954a5c112c28950860ef069b2499 not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.450206 4821 scope.go:117] "RemoveContainer" 
containerID="5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.450412 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b"} err="failed to get container status \"5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b\": rpc error: code = NotFound desc = could not find container \"5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b\": container with ID starting with 5b2831d8cb5a291bb2e059252eec11ba0d0064de8e7721c7ee0d03950d9e944b not found: ID does not exist" Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.506394 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:12:58 crc kubenswrapper[4821]: W0930 17:12:58.527970 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bba582d_4b0d_4f17_af54_d4a5c5ce2989.slice/crio-33c34f17d25456646125d29c6dce68b189cad76687dec182336116a518366fe1 WatchSource:0}: Error finding container 33c34f17d25456646125d29c6dce68b189cad76687dec182336116a518366fe1: Status 404 returned error can't find the container with id 33c34f17d25456646125d29c6dce68b189cad76687dec182336116a518366fe1 Sep 30 17:12:58 crc kubenswrapper[4821]: I0930 17:12:58.714668 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca" path="/var/lib/kubelet/pods/6d33bf2b-6596-4ac4-a4b0-5e4ccbf9e3ca/volumes" Sep 30 17:12:59 crc kubenswrapper[4821]: I0930 17:12:59.259233 4821 generic.go:334] "Generic (PLEG): container finished" podID="3bba582d-4b0d-4f17-af54-d4a5c5ce2989" containerID="ae2ea3c806bf94a203896d51ae7c84ffc9c6c73ef64ae159becbc58df1ac45ec" exitCode=0 Sep 30 17:12:59 crc kubenswrapper[4821]: I0930 17:12:59.259271 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" event={"ID":"3bba582d-4b0d-4f17-af54-d4a5c5ce2989","Type":"ContainerDied","Data":"ae2ea3c806bf94a203896d51ae7c84ffc9c6c73ef64ae159becbc58df1ac45ec"} Sep 30 17:12:59 crc kubenswrapper[4821]: I0930 17:12:59.259291 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" event={"ID":"3bba582d-4b0d-4f17-af54-d4a5c5ce2989","Type":"ContainerStarted","Data":"33c34f17d25456646125d29c6dce68b189cad76687dec182336116a518366fe1"} Sep 30 17:13:00 crc kubenswrapper[4821]: I0930 17:13:00.268489 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" event={"ID":"3bba582d-4b0d-4f17-af54-d4a5c5ce2989","Type":"ContainerStarted","Data":"47a95fc1a9fdf97b741631c28778be49a492a103f9d87b6dd9e571240253e203"} Sep 30 17:13:00 crc kubenswrapper[4821]: I0930 17:13:00.269769 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" event={"ID":"3bba582d-4b0d-4f17-af54-d4a5c5ce2989","Type":"ContainerStarted","Data":"42fea06804ce53e95c712870e67b75a614fa4bd3fdfabb161d31253f34c01af2"} Sep 30 17:13:00 crc kubenswrapper[4821]: I0930 17:13:00.269858 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" event={"ID":"3bba582d-4b0d-4f17-af54-d4a5c5ce2989","Type":"ContainerStarted","Data":"72daec880678cda06ac713ad13ba56ecb4c4fafd2e2851d57416d0a544716a20"} Sep 30 17:13:00 crc kubenswrapper[4821]: I0930 
17:13:00.269925 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" event={"ID":"3bba582d-4b0d-4f17-af54-d4a5c5ce2989","Type":"ContainerStarted","Data":"e6b13b211c5e5e17489cabe62d2e24e37bead2042f418505e0824db0462f19c4"} Sep 30 17:13:00 crc kubenswrapper[4821]: I0930 17:13:00.270000 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" event={"ID":"3bba582d-4b0d-4f17-af54-d4a5c5ce2989","Type":"ContainerStarted","Data":"a6ec05f8da692fae7c95f8efcfcd287a3f34519e1ed5a01d4e6c5e3541980ae8"} Sep 30 17:13:00 crc kubenswrapper[4821]: I0930 17:13:00.270125 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" event={"ID":"3bba582d-4b0d-4f17-af54-d4a5c5ce2989","Type":"ContainerStarted","Data":"13bef450abaa793e25b01857a7782d12ceb5dea59a6d604e0689303a242d3ae0"} Sep 30 17:13:02 crc kubenswrapper[4821]: I0930 17:13:02.283213 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" event={"ID":"3bba582d-4b0d-4f17-af54-d4a5c5ce2989","Type":"ContainerStarted","Data":"4893d7ab2566313c01256198f7e4742460da607bd10fafafe23d5d896e038d82"} Sep 30 17:13:04 crc kubenswrapper[4821]: I0930 17:13:04.303827 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" event={"ID":"3bba582d-4b0d-4f17-af54-d4a5c5ce2989","Type":"ContainerStarted","Data":"3e996cbd40ffee5ab112450b8850f23c614d3a68eb981d4401e1a233b84ffa01"} Sep 30 17:13:04 crc kubenswrapper[4821]: I0930 17:13:04.304487 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:13:04 crc kubenswrapper[4821]: I0930 17:13:04.304500 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:13:04 crc kubenswrapper[4821]: I0930 17:13:04.304537 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:13:04 crc kubenswrapper[4821]: I0930 17:13:04.374177 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:13:04 crc kubenswrapper[4821]: I0930 17:13:04.374735 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:13:04 crc kubenswrapper[4821]: I0930 17:13:04.409845 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" podStartSLOduration=6.4098288629999995 podStartE2EDuration="6.409828863s" podCreationTimestamp="2025-09-30 17:12:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:13:04.335059611 +0000 UTC m=+580.240105555" watchObservedRunningTime="2025-09-30 17:13:04.409828863 +0000 UTC m=+580.314874807" Sep 30 17:13:12 crc kubenswrapper[4821]: I0930 17:13:12.706985 4821 scope.go:117] "RemoveContainer" containerID="992fb7240af9437f906ca1508151e95430f320926f2db765fd848ac767958dec" Sep 30 17:13:12 crc kubenswrapper[4821]: E0930 17:13:12.708309 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus 
pod=multus-h9sjg_openshift-multus(c84981f2-eb86-4d0d-9322-db1b62feeac8)\"" pod="openshift-multus/multus-h9sjg" podUID="c84981f2-eb86-4d0d-9322-db1b62feeac8" Sep 30 17:13:19 crc kubenswrapper[4821]: I0930 17:13:19.350211 4821 patch_prober.go:28] interesting pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:13:19 crc kubenswrapper[4821]: I0930 17:13:19.350545 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:13:19 crc kubenswrapper[4821]: I0930 17:13:19.350583 4821 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" Sep 30 17:13:19 crc kubenswrapper[4821]: I0930 17:13:19.351095 4821 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a96195deaa92fdbd5e1ddc64c627aa78cd37aa2134f2026cfd9b64821097de61"} pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 17:13:19 crc kubenswrapper[4821]: I0930 17:13:19.351152 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" containerID="cri-o://a96195deaa92fdbd5e1ddc64c627aa78cd37aa2134f2026cfd9b64821097de61" gracePeriod=600 Sep 30 17:13:20 crc kubenswrapper[4821]: I0930 17:13:20.381546 4821 generic.go:334] "Generic (PLEG): container finished" podID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerID="a96195deaa92fdbd5e1ddc64c627aa78cd37aa2134f2026cfd9b64821097de61" exitCode=0 Sep 30 17:13:20 crc kubenswrapper[4821]: I0930 17:13:20.381609 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" event={"ID":"1c2ce348-eadc-4629-a03f-fb8924b5b434","Type":"ContainerDied","Data":"a96195deaa92fdbd5e1ddc64c627aa78cd37aa2134f2026cfd9b64821097de61"} Sep 30 17:13:20 crc kubenswrapper[4821]: I0930 17:13:20.382123 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" event={"ID":"1c2ce348-eadc-4629-a03f-fb8924b5b434","Type":"ContainerStarted","Data":"2f00def1099cd0097896b8c09046872a1da2fa6b07915bdb81dc3ad48b1054ee"} Sep 30 17:13:20 crc kubenswrapper[4821]: I0930 17:13:20.382147 4821 scope.go:117] "RemoveContainer" containerID="694415fa80647cce635089cdbd596b460c91aca25e334ca866d1832662c4cfb8" Sep 30 17:13:24 crc kubenswrapper[4821]: I0930 17:13:24.857220 4821 scope.go:117] "RemoveContainer" containerID="bb09282aaacd229c66305d60e720c01a4f2ae0ffa6aadaf7e89fb3976883bb66" Sep 30 17:13:25 crc kubenswrapper[4821]: I0930 17:13:25.407774 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9sjg_c84981f2-eb86-4d0d-9322-db1b62feeac8/kube-multus/2.log" Sep 30 17:13:26 crc kubenswrapper[4821]: I0930 17:13:26.707791 4821 scope.go:117] "RemoveContainer" 
containerID="992fb7240af9437f906ca1508151e95430f320926f2db765fd848ac767958dec" Sep 30 17:13:27 crc kubenswrapper[4821]: I0930 17:13:27.421499 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9sjg_c84981f2-eb86-4d0d-9322-db1b62feeac8/kube-multus/2.log" Sep 30 17:13:27 crc kubenswrapper[4821]: I0930 17:13:27.422324 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h9sjg" event={"ID":"c84981f2-eb86-4d0d-9322-db1b62feeac8","Type":"ContainerStarted","Data":"62695241b0f93eaa2f961a857cb6e7b162facd47d8404ef5afa7d86ea064a408"} Sep 30 17:13:28 crc kubenswrapper[4821]: I0930 17:13:28.529831 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cmf7r" Sep 30 17:13:35 crc kubenswrapper[4821]: I0930 17:13:35.938589 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg"] Sep 30 17:13:35 crc kubenswrapper[4821]: I0930 17:13:35.940477 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg" Sep 30 17:13:35 crc kubenswrapper[4821]: I0930 17:13:35.942702 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Sep 30 17:13:35 crc kubenswrapper[4821]: I0930 17:13:35.951578 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg"] Sep 30 17:13:36 crc kubenswrapper[4821]: I0930 17:13:36.034337 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a04a670a-fd36-4b30-be56-f31c9da6f350-util\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg\" (UID: \"a04a670a-fd36-4b30-be56-f31c9da6f350\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg" Sep 30 17:13:36 crc kubenswrapper[4821]: I0930 17:13:36.034391 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a04a670a-fd36-4b30-be56-f31c9da6f350-bundle\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg\" (UID: \"a04a670a-fd36-4b30-be56-f31c9da6f350\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg" Sep 30 17:13:36 crc kubenswrapper[4821]: I0930 17:13:36.034439 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzk9s\" (UniqueName: \"kubernetes.io/projected/a04a670a-fd36-4b30-be56-f31c9da6f350-kube-api-access-qzk9s\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg\" (UID: \"a04a670a-fd36-4b30-be56-f31c9da6f350\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg" Sep 30 17:13:36 crc kubenswrapper[4821]: I0930 17:13:36.135439 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a04a670a-fd36-4b30-be56-f31c9da6f350-util\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg\" (UID: \"a04a670a-fd36-4b30-be56-f31c9da6f350\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg" Sep 30 17:13:36 crc kubenswrapper[4821]: 
I0930 17:13:36.135488 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a04a670a-fd36-4b30-be56-f31c9da6f350-bundle\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg\" (UID: \"a04a670a-fd36-4b30-be56-f31c9da6f350\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg" Sep 30 17:13:36 crc kubenswrapper[4821]: I0930 17:13:36.135521 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzk9s\" (UniqueName: \"kubernetes.io/projected/a04a670a-fd36-4b30-be56-f31c9da6f350-kube-api-access-qzk9s\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg\" (UID: \"a04a670a-fd36-4b30-be56-f31c9da6f350\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg" Sep 30 17:13:36 crc kubenswrapper[4821]: I0930 17:13:36.136317 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a04a670a-fd36-4b30-be56-f31c9da6f350-util\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg\" (UID: \"a04a670a-fd36-4b30-be56-f31c9da6f350\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg" Sep 30 17:13:36 crc kubenswrapper[4821]: I0930 17:13:36.136595 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a04a670a-fd36-4b30-be56-f31c9da6f350-bundle\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg\" (UID: \"a04a670a-fd36-4b30-be56-f31c9da6f350\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg" Sep 30 17:13:36 crc kubenswrapper[4821]: I0930 17:13:36.152592 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzk9s\" (UniqueName: \"kubernetes.io/projected/a04a670a-fd36-4b30-be56-f31c9da6f350-kube-api-access-qzk9s\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg\" (UID: \"a04a670a-fd36-4b30-be56-f31c9da6f350\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg" Sep 30 17:13:36 crc kubenswrapper[4821]: I0930 17:13:36.266837 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg" Sep 30 17:13:36 crc kubenswrapper[4821]: I0930 17:13:36.447335 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg"] Sep 30 17:13:36 crc kubenswrapper[4821]: I0930 17:13:36.472060 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg" event={"ID":"a04a670a-fd36-4b30-be56-f31c9da6f350","Type":"ContainerStarted","Data":"9441c11d63d26edfc7db7a98a4fb0ac89bf6d26f13b8203476b674aac1c9777a"} Sep 30 17:13:37 crc kubenswrapper[4821]: I0930 17:13:37.478702 4821 generic.go:334] "Generic (PLEG): container finished" podID="a04a670a-fd36-4b30-be56-f31c9da6f350" containerID="ee46609336745e10a68c307ccfbb51bfcdb65e500fd680c7ed84baa28ab7dd7a" exitCode=0 Sep 30 17:13:37 crc kubenswrapper[4821]: I0930 17:13:37.478753 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg" event={"ID":"a04a670a-fd36-4b30-be56-f31c9da6f350","Type":"ContainerDied","Data":"ee46609336745e10a68c307ccfbb51bfcdb65e500fd680c7ed84baa28ab7dd7a"} Sep 30 17:13:39 crc kubenswrapper[4821]: I0930 17:13:39.490592 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg" event={"ID":"a04a670a-fd36-4b30-be56-f31c9da6f350","Type":"ContainerStarted","Data":"7429773780e50f45ab5af20981d42dc32800bb8bfe0eb97005c0f887a5cb2314"} Sep 30 17:13:40 crc kubenswrapper[4821]: I0930 17:13:40.496416 4821 generic.go:334] "Generic (PLEG): container finished" podID="a04a670a-fd36-4b30-be56-f31c9da6f350" containerID="7429773780e50f45ab5af20981d42dc32800bb8bfe0eb97005c0f887a5cb2314" exitCode=0 Sep 30 17:13:40 crc kubenswrapper[4821]: I0930 17:13:40.496456 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg" event={"ID":"a04a670a-fd36-4b30-be56-f31c9da6f350","Type":"ContainerDied","Data":"7429773780e50f45ab5af20981d42dc32800bb8bfe0eb97005c0f887a5cb2314"} Sep 30 17:13:41 crc kubenswrapper[4821]: I0930 17:13:41.506985 4821 generic.go:334] "Generic (PLEG): container finished" podID="a04a670a-fd36-4b30-be56-f31c9da6f350" containerID="46a8d670e8971f7aad3d8e589a8238ddc70ec9e20ee341453cf390c0b41c2443" exitCode=0 Sep 30 17:13:41 crc kubenswrapper[4821]: I0930 17:13:41.507126 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg" event={"ID":"a04a670a-fd36-4b30-be56-f31c9da6f350","Type":"ContainerDied","Data":"46a8d670e8971f7aad3d8e589a8238ddc70ec9e20ee341453cf390c0b41c2443"} Sep 30 17:13:42 crc kubenswrapper[4821]: I0930 17:13:42.783348 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg" Sep 30 17:13:42 crc kubenswrapper[4821]: I0930 17:13:42.933808 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzk9s\" (UniqueName: \"kubernetes.io/projected/a04a670a-fd36-4b30-be56-f31c9da6f350-kube-api-access-qzk9s\") pod \"a04a670a-fd36-4b30-be56-f31c9da6f350\" (UID: \"a04a670a-fd36-4b30-be56-f31c9da6f350\") " Sep 30 17:13:42 crc kubenswrapper[4821]: I0930 17:13:42.933865 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a04a670a-fd36-4b30-be56-f31c9da6f350-bundle\") pod \"a04a670a-fd36-4b30-be56-f31c9da6f350\" (UID: \"a04a670a-fd36-4b30-be56-f31c9da6f350\") " Sep 30 17:13:42 crc kubenswrapper[4821]: I0930 17:13:42.933897 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a04a670a-fd36-4b30-be56-f31c9da6f350-util\") pod \"a04a670a-fd36-4b30-be56-f31c9da6f350\" (UID: \"a04a670a-fd36-4b30-be56-f31c9da6f350\") " Sep 30 17:13:42 crc kubenswrapper[4821]: I0930 17:13:42.934527 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a04a670a-fd36-4b30-be56-f31c9da6f350-bundle" (OuterVolumeSpecName: "bundle") pod "a04a670a-fd36-4b30-be56-f31c9da6f350" (UID: "a04a670a-fd36-4b30-be56-f31c9da6f350"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:13:42 crc kubenswrapper[4821]: I0930 17:13:42.939557 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a04a670a-fd36-4b30-be56-f31c9da6f350-kube-api-access-qzk9s" (OuterVolumeSpecName: "kube-api-access-qzk9s") pod "a04a670a-fd36-4b30-be56-f31c9da6f350" (UID: "a04a670a-fd36-4b30-be56-f31c9da6f350"). InnerVolumeSpecName "kube-api-access-qzk9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:13:42 crc kubenswrapper[4821]: I0930 17:13:42.945591 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a04a670a-fd36-4b30-be56-f31c9da6f350-util" (OuterVolumeSpecName: "util") pod "a04a670a-fd36-4b30-be56-f31c9da6f350" (UID: "a04a670a-fd36-4b30-be56-f31c9da6f350"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:13:43 crc kubenswrapper[4821]: I0930 17:13:43.035477 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzk9s\" (UniqueName: \"kubernetes.io/projected/a04a670a-fd36-4b30-be56-f31c9da6f350-kube-api-access-qzk9s\") on node \"crc\" DevicePath \"\"" Sep 30 17:13:43 crc kubenswrapper[4821]: I0930 17:13:43.035523 4821 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a04a670a-fd36-4b30-be56-f31c9da6f350-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:13:43 crc kubenswrapper[4821]: I0930 17:13:43.035535 4821 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a04a670a-fd36-4b30-be56-f31c9da6f350-util\") on node \"crc\" DevicePath \"\"" Sep 30 17:13:43 crc kubenswrapper[4821]: I0930 17:13:43.524310 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg" event={"ID":"a04a670a-fd36-4b30-be56-f31c9da6f350","Type":"ContainerDied","Data":"9441c11d63d26edfc7db7a98a4fb0ac89bf6d26f13b8203476b674aac1c9777a"} Sep 30 17:13:43 crc kubenswrapper[4821]: I0930 17:13:43.524548 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9441c11d63d26edfc7db7a98a4fb0ac89bf6d26f13b8203476b674aac1c9777a" Sep 30 17:13:43 crc kubenswrapper[4821]: I0930 17:13:43.524383 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg" Sep 30 17:13:44 crc kubenswrapper[4821]: I0930 17:13:44.654326 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5d6f6cfd66-fl26l"] Sep 30 17:13:44 crc kubenswrapper[4821]: E0930 17:13:44.655610 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a04a670a-fd36-4b30-be56-f31c9da6f350" containerName="extract" Sep 30 17:13:44 crc kubenswrapper[4821]: I0930 17:13:44.655704 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="a04a670a-fd36-4b30-be56-f31c9da6f350" containerName="extract" Sep 30 17:13:44 crc kubenswrapper[4821]: E0930 17:13:44.655792 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a04a670a-fd36-4b30-be56-f31c9da6f350" containerName="pull" Sep 30 17:13:44 crc kubenswrapper[4821]: I0930 17:13:44.655874 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="a04a670a-fd36-4b30-be56-f31c9da6f350" containerName="pull" Sep 30 17:13:44 crc kubenswrapper[4821]: E0930 17:13:44.655962 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a04a670a-fd36-4b30-be56-f31c9da6f350" containerName="util" Sep 30 17:13:44 crc kubenswrapper[4821]: I0930 17:13:44.656047 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="a04a670a-fd36-4b30-be56-f31c9da6f350" containerName="util" Sep 30 17:13:44 crc kubenswrapper[4821]: I0930 17:13:44.656246 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="a04a670a-fd36-4b30-be56-f31c9da6f350" containerName="extract" Sep 30 17:13:44 crc kubenswrapper[4821]: I0930 17:13:44.656695 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-fl26l" Sep 30 17:13:44 crc kubenswrapper[4821]: I0930 17:13:44.659466 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Sep 30 17:13:44 crc kubenswrapper[4821]: I0930 17:13:44.659604 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Sep 30 17:13:44 crc kubenswrapper[4821]: I0930 17:13:44.659679 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-k9lhv" Sep 30 17:13:44 crc kubenswrapper[4821]: I0930 17:13:44.667544 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5d6f6cfd66-fl26l"] Sep 30 17:13:44 crc kubenswrapper[4821]: I0930 17:13:44.756044 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d8n5\" (UniqueName: \"kubernetes.io/projected/5fa2210d-5050-4669-91fb-2fcb41e8bb1c-kube-api-access-6d8n5\") pod \"nmstate-operator-5d6f6cfd66-fl26l\" (UID: \"5fa2210d-5050-4669-91fb-2fcb41e8bb1c\") " pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-fl26l" Sep 30 17:13:44 crc kubenswrapper[4821]: I0930 17:13:44.857696 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d8n5\" (UniqueName: \"kubernetes.io/projected/5fa2210d-5050-4669-91fb-2fcb41e8bb1c-kube-api-access-6d8n5\") pod \"nmstate-operator-5d6f6cfd66-fl26l\" (UID: \"5fa2210d-5050-4669-91fb-2fcb41e8bb1c\") " pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-fl26l" Sep 30 17:13:44 crc kubenswrapper[4821]: I0930 17:13:44.874129 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d8n5\" (UniqueName: \"kubernetes.io/projected/5fa2210d-5050-4669-91fb-2fcb41e8bb1c-kube-api-access-6d8n5\") pod \"nmstate-operator-5d6f6cfd66-fl26l\" (UID: \"5fa2210d-5050-4669-91fb-2fcb41e8bb1c\") " pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-fl26l" Sep 30 17:13:44 crc kubenswrapper[4821]: I0930 17:13:44.976642 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-fl26l" Sep 30 17:13:45 crc kubenswrapper[4821]: I0930 17:13:45.159809 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5d6f6cfd66-fl26l"] Sep 30 17:13:45 crc kubenswrapper[4821]: W0930 17:13:45.165277 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fa2210d_5050_4669_91fb_2fcb41e8bb1c.slice/crio-70b31c2eb9c782d7a75bc7c5195f5f3b40676311aa3794bccae561f8526645a1 WatchSource:0}: Error finding container 70b31c2eb9c782d7a75bc7c5195f5f3b40676311aa3794bccae561f8526645a1: Status 404 returned error can't find the container with id 70b31c2eb9c782d7a75bc7c5195f5f3b40676311aa3794bccae561f8526645a1 Sep 30 17:13:45 crc kubenswrapper[4821]: I0930 17:13:45.533682 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-fl26l" event={"ID":"5fa2210d-5050-4669-91fb-2fcb41e8bb1c","Type":"ContainerStarted","Data":"70b31c2eb9c782d7a75bc7c5195f5f3b40676311aa3794bccae561f8526645a1"} Sep 30 17:13:48 crc kubenswrapper[4821]: I0930 17:13:48.554509 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-fl26l" event={"ID":"5fa2210d-5050-4669-91fb-2fcb41e8bb1c","Type":"ContainerStarted","Data":"8aeee4f4e705c378aff1a98088ea84f81a8b67a6946f5433d7a4f947a6da71fe"} Sep 30 17:13:48 crc kubenswrapper[4821]: I0930 17:13:48.568680 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-fl26l" podStartSLOduration=2.046328162 podStartE2EDuration="4.568664251s" podCreationTimestamp="2025-09-30 17:13:44 +0000 UTC" firstStartedPulling="2025-09-30 17:13:45.166872448 +0000 UTC m=+621.071918392" lastFinishedPulling="2025-09-30 17:13:47.689208537 +0000 UTC m=+623.594254481" observedRunningTime="2025-09-30 17:13:48.566638589 +0000 UTC m=+624.471684543" watchObservedRunningTime="2025-09-30 17:13:48.568664251 +0000 UTC m=+624.473710195" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.480184 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58fcddf996-xlwkz"] Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.481240 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58fcddf996-xlwkz" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.485716 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-jtz78" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.503737 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6d689559c5-7sb9q"] Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.504530 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6d689559c5-7sb9q" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.507696 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58fcddf996-xlwkz"] Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.510265 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.520793 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6d689559c5-7sb9q"] Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.530527 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-7vzn5"] Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.531239 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-7vzn5" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.614884 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl7nr\" (UniqueName: \"kubernetes.io/projected/58c46502-d375-4f8d-80fb-e43798a3d459-kube-api-access-sl7nr\") pod \"nmstate-handler-7vzn5\" (UID: \"58c46502-d375-4f8d-80fb-e43798a3d459\") " pod="openshift-nmstate/nmstate-handler-7vzn5" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.614938 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/58c46502-d375-4f8d-80fb-e43798a3d459-dbus-socket\") pod \"nmstate-handler-7vzn5\" (UID: \"58c46502-d375-4f8d-80fb-e43798a3d459\") " pod="openshift-nmstate/nmstate-handler-7vzn5" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.614982 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/843eeb31-9be1-4632-a58a-0bbe45efa603-tls-key-pair\") pod \"nmstate-webhook-6d689559c5-7sb9q\" (UID: \"843eeb31-9be1-4632-a58a-0bbe45efa603\") " pod="openshift-nmstate/nmstate-webhook-6d689559c5-7sb9q" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.615043 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/58c46502-d375-4f8d-80fb-e43798a3d459-nmstate-lock\") pod \"nmstate-handler-7vzn5\" (UID: \"58c46502-d375-4f8d-80fb-e43798a3d459\") " pod="openshift-nmstate/nmstate-handler-7vzn5" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.615066 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx46k\" (UniqueName: \"kubernetes.io/projected/843eeb31-9be1-4632-a58a-0bbe45efa603-kube-api-access-rx46k\") pod \"nmstate-webhook-6d689559c5-7sb9q\" (UID: \"843eeb31-9be1-4632-a58a-0bbe45efa603\") " pod="openshift-nmstate/nmstate-webhook-6d689559c5-7sb9q" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.615101 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/58c46502-d375-4f8d-80fb-e43798a3d459-ovs-socket\") pod \"nmstate-handler-7vzn5\" (UID: \"58c46502-d375-4f8d-80fb-e43798a3d459\") " pod="openshift-nmstate/nmstate-handler-7vzn5" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.615152 4821 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sp8v\" (UniqueName: \"kubernetes.io/projected/a435cb08-e538-4898-845f-cb093a28d190-kube-api-access-8sp8v\") pod \"nmstate-metrics-58fcddf996-xlwkz\" (UID: \"a435cb08-e538-4898-845f-cb093a28d190\") " pod="openshift-nmstate/nmstate-metrics-58fcddf996-xlwkz" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.634181 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-864bb6dfb5-s6qlq"] Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.634954 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-s6qlq" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.645638 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-pqs86" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.645723 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.645823 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.649852 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-864bb6dfb5-s6qlq"] Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.716072 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sp8v\" (UniqueName: \"kubernetes.io/projected/a435cb08-e538-4898-845f-cb093a28d190-kube-api-access-8sp8v\") pod \"nmstate-metrics-58fcddf996-xlwkz\" (UID: \"a435cb08-e538-4898-845f-cb093a28d190\") " pod="openshift-nmstate/nmstate-metrics-58fcddf996-xlwkz" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.716159 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl7nr\" (UniqueName: \"kubernetes.io/projected/58c46502-d375-4f8d-80fb-e43798a3d459-kube-api-access-sl7nr\") pod \"nmstate-handler-7vzn5\" (UID: \"58c46502-d375-4f8d-80fb-e43798a3d459\") " pod="openshift-nmstate/nmstate-handler-7vzn5" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.716188 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/58c46502-d375-4f8d-80fb-e43798a3d459-dbus-socket\") pod \"nmstate-handler-7vzn5\" (UID: \"58c46502-d375-4f8d-80fb-e43798a3d459\") " pod="openshift-nmstate/nmstate-handler-7vzn5" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.716220 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/838dc90c-5925-4cc0-9f35-2a4efc53adc9-nginx-conf\") pod \"nmstate-console-plugin-864bb6dfb5-s6qlq\" (UID: \"838dc90c-5925-4cc0-9f35-2a4efc53adc9\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-s6qlq" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.716246 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/843eeb31-9be1-4632-a58a-0bbe45efa603-tls-key-pair\") pod \"nmstate-webhook-6d689559c5-7sb9q\" (UID: \"843eeb31-9be1-4632-a58a-0bbe45efa603\") " pod="openshift-nmstate/nmstate-webhook-6d689559c5-7sb9q" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.716272 4821 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/838dc90c-5925-4cc0-9f35-2a4efc53adc9-plugin-serving-cert\") pod \"nmstate-console-plugin-864bb6dfb5-s6qlq\" (UID: \"838dc90c-5925-4cc0-9f35-2a4efc53adc9\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-s6qlq" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.716303 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5mnh\" (UniqueName: \"kubernetes.io/projected/838dc90c-5925-4cc0-9f35-2a4efc53adc9-kube-api-access-c5mnh\") pod \"nmstate-console-plugin-864bb6dfb5-s6qlq\" (UID: \"838dc90c-5925-4cc0-9f35-2a4efc53adc9\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-s6qlq" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.716332 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/58c46502-d375-4f8d-80fb-e43798a3d459-nmstate-lock\") pod \"nmstate-handler-7vzn5\" (UID: \"58c46502-d375-4f8d-80fb-e43798a3d459\") " pod="openshift-nmstate/nmstate-handler-7vzn5" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.716365 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/58c46502-d375-4f8d-80fb-e43798a3d459-ovs-socket\") pod \"nmstate-handler-7vzn5\" (UID: \"58c46502-d375-4f8d-80fb-e43798a3d459\") " pod="openshift-nmstate/nmstate-handler-7vzn5" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.716388 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx46k\" (UniqueName: \"kubernetes.io/projected/843eeb31-9be1-4632-a58a-0bbe45efa603-kube-api-access-rx46k\") pod \"nmstate-webhook-6d689559c5-7sb9q\" (UID: \"843eeb31-9be1-4632-a58a-0bbe45efa603\") " pod="openshift-nmstate/nmstate-webhook-6d689559c5-7sb9q" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.716451 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/58c46502-d375-4f8d-80fb-e43798a3d459-nmstate-lock\") pod \"nmstate-handler-7vzn5\" (UID: \"58c46502-d375-4f8d-80fb-e43798a3d459\") " pod="openshift-nmstate/nmstate-handler-7vzn5" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.716510 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/58c46502-d375-4f8d-80fb-e43798a3d459-ovs-socket\") pod \"nmstate-handler-7vzn5\" (UID: \"58c46502-d375-4f8d-80fb-e43798a3d459\") " pod="openshift-nmstate/nmstate-handler-7vzn5" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.716527 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/58c46502-d375-4f8d-80fb-e43798a3d459-dbus-socket\") pod \"nmstate-handler-7vzn5\" (UID: \"58c46502-d375-4f8d-80fb-e43798a3d459\") " pod="openshift-nmstate/nmstate-handler-7vzn5" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.736559 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sp8v\" (UniqueName: \"kubernetes.io/projected/a435cb08-e538-4898-845f-cb093a28d190-kube-api-access-8sp8v\") pod \"nmstate-metrics-58fcddf996-xlwkz\" (UID: \"a435cb08-e538-4898-845f-cb093a28d190\") " pod="openshift-nmstate/nmstate-metrics-58fcddf996-xlwkz" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 
17:13:49.737801 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/843eeb31-9be1-4632-a58a-0bbe45efa603-tls-key-pair\") pod \"nmstate-webhook-6d689559c5-7sb9q\" (UID: \"843eeb31-9be1-4632-a58a-0bbe45efa603\") " pod="openshift-nmstate/nmstate-webhook-6d689559c5-7sb9q" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.740822 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx46k\" (UniqueName: \"kubernetes.io/projected/843eeb31-9be1-4632-a58a-0bbe45efa603-kube-api-access-rx46k\") pod \"nmstate-webhook-6d689559c5-7sb9q\" (UID: \"843eeb31-9be1-4632-a58a-0bbe45efa603\") " pod="openshift-nmstate/nmstate-webhook-6d689559c5-7sb9q" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.743884 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl7nr\" (UniqueName: \"kubernetes.io/projected/58c46502-d375-4f8d-80fb-e43798a3d459-kube-api-access-sl7nr\") pod \"nmstate-handler-7vzn5\" (UID: \"58c46502-d375-4f8d-80fb-e43798a3d459\") " pod="openshift-nmstate/nmstate-handler-7vzn5" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.795329 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58fcddf996-xlwkz" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.827650 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6d689559c5-7sb9q" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.828266 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/838dc90c-5925-4cc0-9f35-2a4efc53adc9-nginx-conf\") pod \"nmstate-console-plugin-864bb6dfb5-s6qlq\" (UID: \"838dc90c-5925-4cc0-9f35-2a4efc53adc9\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-s6qlq" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.830412 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/838dc90c-5925-4cc0-9f35-2a4efc53adc9-nginx-conf\") pod \"nmstate-console-plugin-864bb6dfb5-s6qlq\" (UID: \"838dc90c-5925-4cc0-9f35-2a4efc53adc9\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-s6qlq" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.830953 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/838dc90c-5925-4cc0-9f35-2a4efc53adc9-plugin-serving-cert\") pod \"nmstate-console-plugin-864bb6dfb5-s6qlq\" (UID: \"838dc90c-5925-4cc0-9f35-2a4efc53adc9\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-s6qlq" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.831958 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5mnh\" (UniqueName: \"kubernetes.io/projected/838dc90c-5925-4cc0-9f35-2a4efc53adc9-kube-api-access-c5mnh\") pod \"nmstate-console-plugin-864bb6dfb5-s6qlq\" (UID: \"838dc90c-5925-4cc0-9f35-2a4efc53adc9\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-s6qlq" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.846430 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-7vzn5" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.853507 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-744679cbdb-jcgcz"] Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.855984 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/838dc90c-5925-4cc0-9f35-2a4efc53adc9-plugin-serving-cert\") pod \"nmstate-console-plugin-864bb6dfb5-s6qlq\" (UID: \"838dc90c-5925-4cc0-9f35-2a4efc53adc9\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-s6qlq" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.864476 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-744679cbdb-jcgcz" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.878086 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5mnh\" (UniqueName: \"kubernetes.io/projected/838dc90c-5925-4cc0-9f35-2a4efc53adc9-kube-api-access-c5mnh\") pod \"nmstate-console-plugin-864bb6dfb5-s6qlq\" (UID: \"838dc90c-5925-4cc0-9f35-2a4efc53adc9\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-s6qlq" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.888439 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-744679cbdb-jcgcz"] Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.939829 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8dafb5c-fc9c-4058-ada1-2f8ed2b95529-console-serving-cert\") pod \"console-744679cbdb-jcgcz\" (UID: \"f8dafb5c-fc9c-4058-ada1-2f8ed2b95529\") " pod="openshift-console/console-744679cbdb-jcgcz" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.940170 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f8dafb5c-fc9c-4058-ada1-2f8ed2b95529-service-ca\") pod \"console-744679cbdb-jcgcz\" (UID: \"f8dafb5c-fc9c-4058-ada1-2f8ed2b95529\") " pod="openshift-console/console-744679cbdb-jcgcz" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.940222 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f8dafb5c-fc9c-4058-ada1-2f8ed2b95529-console-config\") pod \"console-744679cbdb-jcgcz\" (UID: \"f8dafb5c-fc9c-4058-ada1-2f8ed2b95529\") " pod="openshift-console/console-744679cbdb-jcgcz" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.940252 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5c9k\" (UniqueName: \"kubernetes.io/projected/f8dafb5c-fc9c-4058-ada1-2f8ed2b95529-kube-api-access-j5c9k\") pod \"console-744679cbdb-jcgcz\" (UID: \"f8dafb5c-fc9c-4058-ada1-2f8ed2b95529\") " pod="openshift-console/console-744679cbdb-jcgcz" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.940284 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f8dafb5c-fc9c-4058-ada1-2f8ed2b95529-oauth-serving-cert\") pod \"console-744679cbdb-jcgcz\" (UID: \"f8dafb5c-fc9c-4058-ada1-2f8ed2b95529\") " pod="openshift-console/console-744679cbdb-jcgcz" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 
17:13:49.940347 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8dafb5c-fc9c-4058-ada1-2f8ed2b95529-trusted-ca-bundle\") pod \"console-744679cbdb-jcgcz\" (UID: \"f8dafb5c-fc9c-4058-ada1-2f8ed2b95529\") " pod="openshift-console/console-744679cbdb-jcgcz" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.940401 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f8dafb5c-fc9c-4058-ada1-2f8ed2b95529-console-oauth-config\") pod \"console-744679cbdb-jcgcz\" (UID: \"f8dafb5c-fc9c-4058-ada1-2f8ed2b95529\") " pod="openshift-console/console-744679cbdb-jcgcz" Sep 30 17:13:49 crc kubenswrapper[4821]: I0930 17:13:49.961167 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-s6qlq" Sep 30 17:13:50 crc kubenswrapper[4821]: I0930 17:13:50.042470 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8dafb5c-fc9c-4058-ada1-2f8ed2b95529-console-serving-cert\") pod \"console-744679cbdb-jcgcz\" (UID: \"f8dafb5c-fc9c-4058-ada1-2f8ed2b95529\") " pod="openshift-console/console-744679cbdb-jcgcz" Sep 30 17:13:50 crc kubenswrapper[4821]: I0930 17:13:50.042634 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f8dafb5c-fc9c-4058-ada1-2f8ed2b95529-service-ca\") pod \"console-744679cbdb-jcgcz\" (UID: \"f8dafb5c-fc9c-4058-ada1-2f8ed2b95529\") " pod="openshift-console/console-744679cbdb-jcgcz" Sep 30 17:13:50 crc kubenswrapper[4821]: I0930 17:13:50.042767 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f8dafb5c-fc9c-4058-ada1-2f8ed2b95529-console-config\") pod \"console-744679cbdb-jcgcz\" (UID: \"f8dafb5c-fc9c-4058-ada1-2f8ed2b95529\") " pod="openshift-console/console-744679cbdb-jcgcz" Sep 30 17:13:50 crc kubenswrapper[4821]: I0930 17:13:50.042803 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5c9k\" (UniqueName: \"kubernetes.io/projected/f8dafb5c-fc9c-4058-ada1-2f8ed2b95529-kube-api-access-j5c9k\") pod \"console-744679cbdb-jcgcz\" (UID: \"f8dafb5c-fc9c-4058-ada1-2f8ed2b95529\") " pod="openshift-console/console-744679cbdb-jcgcz" Sep 30 17:13:50 crc kubenswrapper[4821]: I0930 17:13:50.042839 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f8dafb5c-fc9c-4058-ada1-2f8ed2b95529-oauth-serving-cert\") pod \"console-744679cbdb-jcgcz\" (UID: \"f8dafb5c-fc9c-4058-ada1-2f8ed2b95529\") " pod="openshift-console/console-744679cbdb-jcgcz" Sep 30 17:13:50 crc kubenswrapper[4821]: I0930 17:13:50.044852 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8dafb5c-fc9c-4058-ada1-2f8ed2b95529-trusted-ca-bundle\") pod \"console-744679cbdb-jcgcz\" (UID: \"f8dafb5c-fc9c-4058-ada1-2f8ed2b95529\") " pod="openshift-console/console-744679cbdb-jcgcz" Sep 30 17:13:50 crc kubenswrapper[4821]: I0930 17:13:50.044919 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/f8dafb5c-fc9c-4058-ada1-2f8ed2b95529-service-ca\") pod \"console-744679cbdb-jcgcz\" (UID: \"f8dafb5c-fc9c-4058-ada1-2f8ed2b95529\") " pod="openshift-console/console-744679cbdb-jcgcz" Sep 30 17:13:50 crc kubenswrapper[4821]: I0930 17:13:50.044938 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f8dafb5c-fc9c-4058-ada1-2f8ed2b95529-console-oauth-config\") pod \"console-744679cbdb-jcgcz\" (UID: \"f8dafb5c-fc9c-4058-ada1-2f8ed2b95529\") " pod="openshift-console/console-744679cbdb-jcgcz" Sep 30 17:13:50 crc kubenswrapper[4821]: I0930 17:13:50.045556 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f8dafb5c-fc9c-4058-ada1-2f8ed2b95529-console-config\") pod \"console-744679cbdb-jcgcz\" (UID: \"f8dafb5c-fc9c-4058-ada1-2f8ed2b95529\") " pod="openshift-console/console-744679cbdb-jcgcz" Sep 30 17:13:50 crc kubenswrapper[4821]: I0930 17:13:50.045724 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f8dafb5c-fc9c-4058-ada1-2f8ed2b95529-oauth-serving-cert\") pod \"console-744679cbdb-jcgcz\" (UID: \"f8dafb5c-fc9c-4058-ada1-2f8ed2b95529\") " pod="openshift-console/console-744679cbdb-jcgcz" Sep 30 17:13:50 crc kubenswrapper[4821]: I0930 17:13:50.047013 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8dafb5c-fc9c-4058-ada1-2f8ed2b95529-trusted-ca-bundle\") pod \"console-744679cbdb-jcgcz\" (UID: \"f8dafb5c-fc9c-4058-ada1-2f8ed2b95529\") " pod="openshift-console/console-744679cbdb-jcgcz" Sep 30 17:13:50 crc kubenswrapper[4821]: I0930 17:13:50.050601 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8dafb5c-fc9c-4058-ada1-2f8ed2b95529-console-serving-cert\") pod \"console-744679cbdb-jcgcz\" (UID: \"f8dafb5c-fc9c-4058-ada1-2f8ed2b95529\") " pod="openshift-console/console-744679cbdb-jcgcz" Sep 30 17:13:50 crc kubenswrapper[4821]: I0930 17:13:50.050810 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f8dafb5c-fc9c-4058-ada1-2f8ed2b95529-console-oauth-config\") pod \"console-744679cbdb-jcgcz\" (UID: \"f8dafb5c-fc9c-4058-ada1-2f8ed2b95529\") " pod="openshift-console/console-744679cbdb-jcgcz" Sep 30 17:13:50 crc kubenswrapper[4821]: I0930 17:13:50.064217 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5c9k\" (UniqueName: \"kubernetes.io/projected/f8dafb5c-fc9c-4058-ada1-2f8ed2b95529-kube-api-access-j5c9k\") pod \"console-744679cbdb-jcgcz\" (UID: \"f8dafb5c-fc9c-4058-ada1-2f8ed2b95529\") " pod="openshift-console/console-744679cbdb-jcgcz" Sep 30 17:13:50 crc kubenswrapper[4821]: I0930 17:13:50.096886 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58fcddf996-xlwkz"] Sep 30 17:13:50 crc kubenswrapper[4821]: W0930 17:13:50.111074 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda435cb08_e538_4898_845f_cb093a28d190.slice/crio-6505d91b550b49f6390facac2ebe43d0fa0768bf5a832f7dee4d15a088fcbf1c WatchSource:0}: Error finding container 6505d91b550b49f6390facac2ebe43d0fa0768bf5a832f7dee4d15a088fcbf1c: Status 404 returned 
Sep 30 17:13:50 crc kubenswrapper[4821]: I0930 17:13:50.096886 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58fcddf996-xlwkz"]
Sep 30 17:13:50 crc kubenswrapper[4821]: W0930 17:13:50.111074 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda435cb08_e538_4898_845f_cb093a28d190.slice/crio-6505d91b550b49f6390facac2ebe43d0fa0768bf5a832f7dee4d15a088fcbf1c WatchSource:0}: Error finding container 6505d91b550b49f6390facac2ebe43d0fa0768bf5a832f7dee4d15a088fcbf1c: Status 404 returned error can't find the container with id 6505d91b550b49f6390facac2ebe43d0fa0768bf5a832f7dee4d15a088fcbf1c
Sep 30 17:13:50 crc kubenswrapper[4821]: I0930 17:13:50.179329 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6d689559c5-7sb9q"]
Sep 30 17:13:50 crc kubenswrapper[4821]: I0930 17:13:50.198414 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-744679cbdb-jcgcz"
Sep 30 17:13:50 crc kubenswrapper[4821]: I0930 17:13:50.243933 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-864bb6dfb5-s6qlq"]
Sep 30 17:13:50 crc kubenswrapper[4821]: W0930 17:13:50.253360 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod838dc90c_5925_4cc0_9f35_2a4efc53adc9.slice/crio-cecbd3af57196ab60b3be09230d535609142447ca1c0ed2c56f35decbacd9c0c WatchSource:0}: Error finding container cecbd3af57196ab60b3be09230d535609142447ca1c0ed2c56f35decbacd9c0c: Status 404 returned error can't find the container with id cecbd3af57196ab60b3be09230d535609142447ca1c0ed2c56f35decbacd9c0c
Sep 30 17:13:50 crc kubenswrapper[4821]: I0930 17:13:50.391405 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-744679cbdb-jcgcz"]
Sep 30 17:13:50 crc kubenswrapper[4821]: W0930 17:13:50.397963 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8dafb5c_fc9c_4058_ada1_2f8ed2b95529.slice/crio-3669fca05ee70e865a6b435c4b8385d018c908230b9968b7e96abe47d550a110 WatchSource:0}: Error finding container 3669fca05ee70e865a6b435c4b8385d018c908230b9968b7e96abe47d550a110: Status 404 returned error can't find the container with id 3669fca05ee70e865a6b435c4b8385d018c908230b9968b7e96abe47d550a110
Sep 30 17:13:50 crc kubenswrapper[4821]: I0930 17:13:50.571885 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58fcddf996-xlwkz" event={"ID":"a435cb08-e538-4898-845f-cb093a28d190","Type":"ContainerStarted","Data":"6505d91b550b49f6390facac2ebe43d0fa0768bf5a832f7dee4d15a088fcbf1c"}
Sep 30 17:13:50 crc kubenswrapper[4821]: I0930 17:13:50.573748 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-s6qlq" event={"ID":"838dc90c-5925-4cc0-9f35-2a4efc53adc9","Type":"ContainerStarted","Data":"cecbd3af57196ab60b3be09230d535609142447ca1c0ed2c56f35decbacd9c0c"}
Sep 30 17:13:50 crc kubenswrapper[4821]: I0930 17:13:50.574720 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-7vzn5" event={"ID":"58c46502-d375-4f8d-80fb-e43798a3d459","Type":"ContainerStarted","Data":"48195aee2f503c504d7507f5398b87f5cba697823cb38bce7ed3a7eadb954494"}
Sep 30 17:13:50 crc kubenswrapper[4821]: I0930 17:13:50.576050 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6d689559c5-7sb9q" event={"ID":"843eeb31-9be1-4632-a58a-0bbe45efa603","Type":"ContainerStarted","Data":"36777c085470faf069039adcc94bc669465261819d4b21cb258c70fd9972f011"}
Sep 30 17:13:50 crc kubenswrapper[4821]: I0930 17:13:50.577228 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-744679cbdb-jcgcz" event={"ID":"f8dafb5c-fc9c-4058-ada1-2f8ed2b95529","Type":"ContainerStarted","Data":"2fca4a8cc018004ada51b322eb609d51dacfd3591b5e2539514d3410414c7b3f"}
Sep 30 17:13:50 crc kubenswrapper[4821]: I0930 17:13:50.577253 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-744679cbdb-jcgcz" event={"ID":"f8dafb5c-fc9c-4058-ada1-2f8ed2b95529","Type":"ContainerStarted","Data":"3669fca05ee70e865a6b435c4b8385d018c908230b9968b7e96abe47d550a110"}
Sep 30 17:13:53 crc kubenswrapper[4821]: I0930 17:13:53.594366 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-s6qlq" event={"ID":"838dc90c-5925-4cc0-9f35-2a4efc53adc9","Type":"ContainerStarted","Data":"ba343abb616944a3fe5d13ff1934894b46f5829e3a461e84982a0b1ca8cb2070"}
Sep 30 17:13:53 crc kubenswrapper[4821]: I0930 17:13:53.596509 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-7vzn5" event={"ID":"58c46502-d375-4f8d-80fb-e43798a3d459","Type":"ContainerStarted","Data":"f0b6e1e0172ac4f872c3af692c3f32b9eeb5727e0ae349142d09636a44ffab76"}
Sep 30 17:13:53 crc kubenswrapper[4821]: I0930 17:13:53.596929 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-7vzn5"
Sep 30 17:13:53 crc kubenswrapper[4821]: I0930 17:13:53.598359 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6d689559c5-7sb9q" event={"ID":"843eeb31-9be1-4632-a58a-0bbe45efa603","Type":"ContainerStarted","Data":"ff68f34be9377e737cc8da6eac8f5b763f6ce96497297aa35b74ef9109c47398"}
Sep 30 17:13:53 crc kubenswrapper[4821]: I0930 17:13:53.598760 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6d689559c5-7sb9q"
Sep 30 17:13:53 crc kubenswrapper[4821]: I0930 17:13:53.600723 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58fcddf996-xlwkz" event={"ID":"a435cb08-e538-4898-845f-cb093a28d190","Type":"ContainerStarted","Data":"cf2e62ef8f3c704ea2b73c8c32f6027c38617239f3824f59988c2af1b36c43c6"}
Sep 30 17:13:53 crc kubenswrapper[4821]: I0930 17:13:53.610641 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-s6qlq" podStartSLOduration=1.569150981 podStartE2EDuration="4.610629499s" podCreationTimestamp="2025-09-30 17:13:49 +0000 UTC" firstStartedPulling="2025-09-30 17:13:50.256489419 +0000 UTC m=+626.161535363" lastFinishedPulling="2025-09-30 17:13:53.297967937 +0000 UTC m=+629.203013881" observedRunningTime="2025-09-30 17:13:53.609368648 +0000 UTC m=+629.514414592" watchObservedRunningTime="2025-09-30 17:13:53.610629499 +0000 UTC m=+629.515675443"
Sep 30 17:13:53 crc kubenswrapper[4821]: I0930 17:13:53.610922 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-744679cbdb-jcgcz" podStartSLOduration=4.610917506 podStartE2EDuration="4.610917506s" podCreationTimestamp="2025-09-30 17:13:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:13:50.592764252 +0000 UTC m=+626.497810206" watchObservedRunningTime="2025-09-30 17:13:53.610917506 +0000 UTC m=+629.515963460"
Sep 30 17:13:53 crc kubenswrapper[4821]: I0930 17:13:53.651184 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-7vzn5" podStartSLOduration=1.2619156440000001 podStartE2EDuration="4.651165553s" podCreationTimestamp="2025-09-30 17:13:49 +0000 UTC" firstStartedPulling="2025-09-30 17:13:49.925546179 +0000 UTC m=+625.830592123" lastFinishedPulling="2025-09-30 17:13:53.314796088 +0000 UTC m=+629.219842032" observedRunningTime="2025-09-30 17:13:53.641747828 +0000 UTC m=+629.546793772" watchObservedRunningTime="2025-09-30 17:13:53.651165553 +0000 UTC m=+629.556211497"
Sep 30 17:13:53 crc kubenswrapper[4821]: I0930 17:13:53.661298 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6d689559c5-7sb9q" podStartSLOduration=1.54269696 podStartE2EDuration="4.661281116s" podCreationTimestamp="2025-09-30 17:13:49 +0000 UTC" firstStartedPulling="2025-09-30 17:13:50.184232612 +0000 UTC m=+626.089278546" lastFinishedPulling="2025-09-30 17:13:53.302816758 +0000 UTC m=+629.207862702" observedRunningTime="2025-09-30 17:13:53.656904687 +0000 UTC m=+629.561950631" watchObservedRunningTime="2025-09-30 17:13:53.661281116 +0000 UTC m=+629.566327060"
Sep 30 17:13:56 crc kubenswrapper[4821]: I0930 17:13:56.622362 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58fcddf996-xlwkz" event={"ID":"a435cb08-e538-4898-845f-cb093a28d190","Type":"ContainerStarted","Data":"253e0d9b011a7bb4f564f17abd2215d6d11708702b8df5fa064237ed09977bbb"}
Sep 30 17:13:56 crc kubenswrapper[4821]: I0930 17:13:56.637841 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58fcddf996-xlwkz" podStartSLOduration=2.220302462 podStartE2EDuration="7.637814798s" podCreationTimestamp="2025-09-30 17:13:49 +0000 UTC" firstStartedPulling="2025-09-30 17:13:50.119529532 +0000 UTC m=+626.024575476" lastFinishedPulling="2025-09-30 17:13:55.537041868 +0000 UTC m=+631.442087812" observedRunningTime="2025-09-30 17:13:56.635645194 +0000 UTC m=+632.540691148" watchObservedRunningTime="2025-09-30 17:13:56.637814798 +0000 UTC m=+632.542860752"
Need to start a new one" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg" Sep 30 17:14:22 crc kubenswrapper[4821]: I0930 17:14:22.192043 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Sep 30 17:14:22 crc kubenswrapper[4821]: I0930 17:14:22.201527 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg"] Sep 30 17:14:22 crc kubenswrapper[4821]: I0930 17:14:22.269567 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cf49a14-2605-4bfb-9dce-04b1438b107c-bundle\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg\" (UID: \"3cf49a14-2605-4bfb-9dce-04b1438b107c\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg" Sep 30 17:14:22 crc kubenswrapper[4821]: I0930 17:14:22.269655 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cf49a14-2605-4bfb-9dce-04b1438b107c-util\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg\" (UID: \"3cf49a14-2605-4bfb-9dce-04b1438b107c\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg" Sep 30 17:14:22 crc kubenswrapper[4821]: I0930 17:14:22.269686 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gchk7\" (UniqueName: \"kubernetes.io/projected/3cf49a14-2605-4bfb-9dce-04b1438b107c-kube-api-access-gchk7\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg\" (UID: \"3cf49a14-2605-4bfb-9dce-04b1438b107c\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg" Sep 30 17:14:22 crc kubenswrapper[4821]: I0930 17:14:22.370788 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cf49a14-2605-4bfb-9dce-04b1438b107c-util\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg\" (UID: \"3cf49a14-2605-4bfb-9dce-04b1438b107c\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg" Sep 30 17:14:22 crc kubenswrapper[4821]: I0930 17:14:22.370840 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gchk7\" (UniqueName: \"kubernetes.io/projected/3cf49a14-2605-4bfb-9dce-04b1438b107c-kube-api-access-gchk7\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg\" (UID: \"3cf49a14-2605-4bfb-9dce-04b1438b107c\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg" Sep 30 17:14:22 crc kubenswrapper[4821]: I0930 17:14:22.370877 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cf49a14-2605-4bfb-9dce-04b1438b107c-bundle\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg\" (UID: \"3cf49a14-2605-4bfb-9dce-04b1438b107c\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg" Sep 30 17:14:22 crc kubenswrapper[4821]: I0930 17:14:22.371293 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/3cf49a14-2605-4bfb-9dce-04b1438b107c-util\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg\" (UID: \"3cf49a14-2605-4bfb-9dce-04b1438b107c\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg" Sep 30 17:14:22 crc kubenswrapper[4821]: I0930 17:14:22.371320 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cf49a14-2605-4bfb-9dce-04b1438b107c-bundle\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg\" (UID: \"3cf49a14-2605-4bfb-9dce-04b1438b107c\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg" Sep 30 17:14:22 crc kubenswrapper[4821]: I0930 17:14:22.389416 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gchk7\" (UniqueName: \"kubernetes.io/projected/3cf49a14-2605-4bfb-9dce-04b1438b107c-kube-api-access-gchk7\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg\" (UID: \"3cf49a14-2605-4bfb-9dce-04b1438b107c\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg" Sep 30 17:14:22 crc kubenswrapper[4821]: I0930 17:14:22.508686 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg" Sep 30 17:14:22 crc kubenswrapper[4821]: I0930 17:14:22.889103 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg"] Sep 30 17:14:23 crc kubenswrapper[4821]: I0930 17:14:23.774815 4821 generic.go:334] "Generic (PLEG): container finished" podID="3cf49a14-2605-4bfb-9dce-04b1438b107c" containerID="6fdacc18dc5d25430c0559a73bcf311088be0668001819507e5af428745113e7" exitCode=0 Sep 30 17:14:23 crc kubenswrapper[4821]: I0930 17:14:23.774879 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg" event={"ID":"3cf49a14-2605-4bfb-9dce-04b1438b107c","Type":"ContainerDied","Data":"6fdacc18dc5d25430c0559a73bcf311088be0668001819507e5af428745113e7"} Sep 30 17:14:23 crc kubenswrapper[4821]: I0930 17:14:23.775044 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg" event={"ID":"3cf49a14-2605-4bfb-9dce-04b1438b107c","Type":"ContainerStarted","Data":"0354ccdc5a242d27e70d388ab37c3f3ddb9e5f4fc6feac23c2fee0bc2a3c6874"} Sep 30 17:14:25 crc kubenswrapper[4821]: I0930 17:14:25.763944 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-lzvgr" podUID="08d6cb47-472a-4bda-bfc0-738029e84e40" containerName="console" containerID="cri-o://78c3ada235a9536e0a0e413c1a226c773f8660598a929b9215aa4e806a9ccd56" gracePeriod=15 Sep 30 17:14:25 crc kubenswrapper[4821]: I0930 17:14:25.788559 4821 generic.go:334] "Generic (PLEG): container finished" podID="3cf49a14-2605-4bfb-9dce-04b1438b107c" containerID="c4771b9ac6f446c2b33fcf5b8ade63bc17153a4740e1d92447b45ec3624026d0" exitCode=0 Sep 30 17:14:25 crc kubenswrapper[4821]: I0930 17:14:25.788602 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg" 
event={"ID":"3cf49a14-2605-4bfb-9dce-04b1438b107c","Type":"ContainerDied","Data":"c4771b9ac6f446c2b33fcf5b8ade63bc17153a4740e1d92447b45ec3624026d0"} Sep 30 17:14:26 crc kubenswrapper[4821]: I0930 17:14:26.096591 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-lzvgr_08d6cb47-472a-4bda-bfc0-738029e84e40/console/0.log" Sep 30 17:14:26 crc kubenswrapper[4821]: I0930 17:14:26.097008 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-lzvgr" Sep 30 17:14:26 crc kubenswrapper[4821]: I0930 17:14:26.216003 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/08d6cb47-472a-4bda-bfc0-738029e84e40-service-ca\") pod \"08d6cb47-472a-4bda-bfc0-738029e84e40\" (UID: \"08d6cb47-472a-4bda-bfc0-738029e84e40\") " Sep 30 17:14:26 crc kubenswrapper[4821]: I0930 17:14:26.216069 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjk4p\" (UniqueName: \"kubernetes.io/projected/08d6cb47-472a-4bda-bfc0-738029e84e40-kube-api-access-vjk4p\") pod \"08d6cb47-472a-4bda-bfc0-738029e84e40\" (UID: \"08d6cb47-472a-4bda-bfc0-738029e84e40\") " Sep 30 17:14:26 crc kubenswrapper[4821]: I0930 17:14:26.216116 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/08d6cb47-472a-4bda-bfc0-738029e84e40-console-oauth-config\") pod \"08d6cb47-472a-4bda-bfc0-738029e84e40\" (UID: \"08d6cb47-472a-4bda-bfc0-738029e84e40\") " Sep 30 17:14:26 crc kubenswrapper[4821]: I0930 17:14:26.216145 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/08d6cb47-472a-4bda-bfc0-738029e84e40-console-config\") pod \"08d6cb47-472a-4bda-bfc0-738029e84e40\" (UID: \"08d6cb47-472a-4bda-bfc0-738029e84e40\") " Sep 30 17:14:26 crc kubenswrapper[4821]: I0930 17:14:26.216200 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08d6cb47-472a-4bda-bfc0-738029e84e40-trusted-ca-bundle\") pod \"08d6cb47-472a-4bda-bfc0-738029e84e40\" (UID: \"08d6cb47-472a-4bda-bfc0-738029e84e40\") " Sep 30 17:14:26 crc kubenswrapper[4821]: I0930 17:14:26.216230 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/08d6cb47-472a-4bda-bfc0-738029e84e40-console-serving-cert\") pod \"08d6cb47-472a-4bda-bfc0-738029e84e40\" (UID: \"08d6cb47-472a-4bda-bfc0-738029e84e40\") " Sep 30 17:14:26 crc kubenswrapper[4821]: I0930 17:14:26.216249 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/08d6cb47-472a-4bda-bfc0-738029e84e40-oauth-serving-cert\") pod \"08d6cb47-472a-4bda-bfc0-738029e84e40\" (UID: \"08d6cb47-472a-4bda-bfc0-738029e84e40\") " Sep 30 17:14:26 crc kubenswrapper[4821]: I0930 17:14:26.217182 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08d6cb47-472a-4bda-bfc0-738029e84e40-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "08d6cb47-472a-4bda-bfc0-738029e84e40" (UID: "08d6cb47-472a-4bda-bfc0-738029e84e40"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:14:26 crc kubenswrapper[4821]: I0930 17:14:26.217195 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08d6cb47-472a-4bda-bfc0-738029e84e40-service-ca" (OuterVolumeSpecName: "service-ca") pod "08d6cb47-472a-4bda-bfc0-738029e84e40" (UID: "08d6cb47-472a-4bda-bfc0-738029e84e40"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:14:26 crc kubenswrapper[4821]: I0930 17:14:26.217173 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08d6cb47-472a-4bda-bfc0-738029e84e40-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "08d6cb47-472a-4bda-bfc0-738029e84e40" (UID: "08d6cb47-472a-4bda-bfc0-738029e84e40"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:14:26 crc kubenswrapper[4821]: I0930 17:14:26.217213 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08d6cb47-472a-4bda-bfc0-738029e84e40-console-config" (OuterVolumeSpecName: "console-config") pod "08d6cb47-472a-4bda-bfc0-738029e84e40" (UID: "08d6cb47-472a-4bda-bfc0-738029e84e40"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:14:26 crc kubenswrapper[4821]: I0930 17:14:26.221983 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08d6cb47-472a-4bda-bfc0-738029e84e40-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "08d6cb47-472a-4bda-bfc0-738029e84e40" (UID: "08d6cb47-472a-4bda-bfc0-738029e84e40"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:14:26 crc kubenswrapper[4821]: I0930 17:14:26.222060 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08d6cb47-472a-4bda-bfc0-738029e84e40-kube-api-access-vjk4p" (OuterVolumeSpecName: "kube-api-access-vjk4p") pod "08d6cb47-472a-4bda-bfc0-738029e84e40" (UID: "08d6cb47-472a-4bda-bfc0-738029e84e40"). InnerVolumeSpecName "kube-api-access-vjk4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:14:26 crc kubenswrapper[4821]: I0930 17:14:26.222473 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08d6cb47-472a-4bda-bfc0-738029e84e40-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "08d6cb47-472a-4bda-bfc0-738029e84e40" (UID: "08d6cb47-472a-4bda-bfc0-738029e84e40"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:14:26 crc kubenswrapper[4821]: I0930 17:14:26.318391 4821 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08d6cb47-472a-4bda-bfc0-738029e84e40-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:14:26 crc kubenswrapper[4821]: I0930 17:14:26.318701 4821 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/08d6cb47-472a-4bda-bfc0-738029e84e40-console-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:14:26 crc kubenswrapper[4821]: I0930 17:14:26.318810 4821 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/08d6cb47-472a-4bda-bfc0-738029e84e40-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:14:26 crc kubenswrapper[4821]: I0930 17:14:26.318895 4821 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/08d6cb47-472a-4bda-bfc0-738029e84e40-service-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:14:26 crc kubenswrapper[4821]: I0930 17:14:26.318979 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjk4p\" (UniqueName: \"kubernetes.io/projected/08d6cb47-472a-4bda-bfc0-738029e84e40-kube-api-access-vjk4p\") on node \"crc\" DevicePath \"\"" Sep 30 17:14:26 crc kubenswrapper[4821]: I0930 17:14:26.319168 4821 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/08d6cb47-472a-4bda-bfc0-738029e84e40-console-oauth-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:14:26 crc kubenswrapper[4821]: I0930 17:14:26.319281 4821 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/08d6cb47-472a-4bda-bfc0-738029e84e40-console-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:14:26 crc kubenswrapper[4821]: I0930 17:14:26.796681 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-lzvgr_08d6cb47-472a-4bda-bfc0-738029e84e40/console/0.log" Sep 30 17:14:26 crc kubenswrapper[4821]: I0930 17:14:26.797310 4821 generic.go:334] "Generic (PLEG): container finished" podID="08d6cb47-472a-4bda-bfc0-738029e84e40" containerID="78c3ada235a9536e0a0e413c1a226c773f8660598a929b9215aa4e806a9ccd56" exitCode=2 Sep 30 17:14:26 crc kubenswrapper[4821]: I0930 17:14:26.797619 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-lzvgr" Sep 30 17:14:26 crc kubenswrapper[4821]: I0930 17:14:26.797499 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-lzvgr" event={"ID":"08d6cb47-472a-4bda-bfc0-738029e84e40","Type":"ContainerDied","Data":"78c3ada235a9536e0a0e413c1a226c773f8660598a929b9215aa4e806a9ccd56"} Sep 30 17:14:26 crc kubenswrapper[4821]: I0930 17:14:26.797832 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-lzvgr" event={"ID":"08d6cb47-472a-4bda-bfc0-738029e84e40","Type":"ContainerDied","Data":"c1ce6d77d61b7eb78bac2195584ecc11d9d07d017a73197ef559965aa7abc73e"} Sep 30 17:14:26 crc kubenswrapper[4821]: I0930 17:14:26.797932 4821 scope.go:117] "RemoveContainer" containerID="78c3ada235a9536e0a0e413c1a226c773f8660598a929b9215aa4e806a9ccd56" Sep 30 17:14:26 crc kubenswrapper[4821]: I0930 17:14:26.802339 4821 generic.go:334] "Generic (PLEG): container finished" podID="3cf49a14-2605-4bfb-9dce-04b1438b107c" containerID="1f50e160d6313721469a273467a3ee0a4e98a5c0f896e276df4089a500a8de61" exitCode=0 Sep 30 17:14:26 crc kubenswrapper[4821]: I0930 17:14:26.802395 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg" event={"ID":"3cf49a14-2605-4bfb-9dce-04b1438b107c","Type":"ContainerDied","Data":"1f50e160d6313721469a273467a3ee0a4e98a5c0f896e276df4089a500a8de61"} Sep 30 17:14:26 crc kubenswrapper[4821]: I0930 17:14:26.827671 4821 scope.go:117] "RemoveContainer" containerID="78c3ada235a9536e0a0e413c1a226c773f8660598a929b9215aa4e806a9ccd56" Sep 30 17:14:26 crc kubenswrapper[4821]: E0930 17:14:26.828429 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78c3ada235a9536e0a0e413c1a226c773f8660598a929b9215aa4e806a9ccd56\": container with ID starting with 78c3ada235a9536e0a0e413c1a226c773f8660598a929b9215aa4e806a9ccd56 not found: ID does not exist" containerID="78c3ada235a9536e0a0e413c1a226c773f8660598a929b9215aa4e806a9ccd56" Sep 30 17:14:26 crc kubenswrapper[4821]: I0930 17:14:26.828663 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78c3ada235a9536e0a0e413c1a226c773f8660598a929b9215aa4e806a9ccd56"} err="failed to get container status \"78c3ada235a9536e0a0e413c1a226c773f8660598a929b9215aa4e806a9ccd56\": rpc error: code = NotFound desc = could not find container \"78c3ada235a9536e0a0e413c1a226c773f8660598a929b9215aa4e806a9ccd56\": container with ID starting with 78c3ada235a9536e0a0e413c1a226c773f8660598a929b9215aa4e806a9ccd56 not found: ID does not exist" Sep 30 17:14:26 crc kubenswrapper[4821]: I0930 17:14:26.850538 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-lzvgr"] Sep 30 17:14:26 crc kubenswrapper[4821]: I0930 17:14:26.855659 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-lzvgr"] Sep 30 17:14:28 crc kubenswrapper[4821]: I0930 17:14:28.009578 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg" Sep 30 17:14:28 crc kubenswrapper[4821]: I0930 17:14:28.047989 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gchk7\" (UniqueName: \"kubernetes.io/projected/3cf49a14-2605-4bfb-9dce-04b1438b107c-kube-api-access-gchk7\") pod \"3cf49a14-2605-4bfb-9dce-04b1438b107c\" (UID: \"3cf49a14-2605-4bfb-9dce-04b1438b107c\") " Sep 30 17:14:28 crc kubenswrapper[4821]: I0930 17:14:28.048081 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cf49a14-2605-4bfb-9dce-04b1438b107c-bundle\") pod \"3cf49a14-2605-4bfb-9dce-04b1438b107c\" (UID: \"3cf49a14-2605-4bfb-9dce-04b1438b107c\") " Sep 30 17:14:28 crc kubenswrapper[4821]: I0930 17:14:28.048160 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cf49a14-2605-4bfb-9dce-04b1438b107c-util\") pod \"3cf49a14-2605-4bfb-9dce-04b1438b107c\" (UID: \"3cf49a14-2605-4bfb-9dce-04b1438b107c\") " Sep 30 17:14:28 crc kubenswrapper[4821]: I0930 17:14:28.049217 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cf49a14-2605-4bfb-9dce-04b1438b107c-bundle" (OuterVolumeSpecName: "bundle") pod "3cf49a14-2605-4bfb-9dce-04b1438b107c" (UID: "3cf49a14-2605-4bfb-9dce-04b1438b107c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:14:28 crc kubenswrapper[4821]: I0930 17:14:28.053518 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cf49a14-2605-4bfb-9dce-04b1438b107c-kube-api-access-gchk7" (OuterVolumeSpecName: "kube-api-access-gchk7") pod "3cf49a14-2605-4bfb-9dce-04b1438b107c" (UID: "3cf49a14-2605-4bfb-9dce-04b1438b107c"). InnerVolumeSpecName "kube-api-access-gchk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:14:28 crc kubenswrapper[4821]: I0930 17:14:28.063050 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cf49a14-2605-4bfb-9dce-04b1438b107c-util" (OuterVolumeSpecName: "util") pod "3cf49a14-2605-4bfb-9dce-04b1438b107c" (UID: "3cf49a14-2605-4bfb-9dce-04b1438b107c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:14:28 crc kubenswrapper[4821]: I0930 17:14:28.149876 4821 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cf49a14-2605-4bfb-9dce-04b1438b107c-util\") on node \"crc\" DevicePath \"\"" Sep 30 17:14:28 crc kubenswrapper[4821]: I0930 17:14:28.149909 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gchk7\" (UniqueName: \"kubernetes.io/projected/3cf49a14-2605-4bfb-9dce-04b1438b107c-kube-api-access-gchk7\") on node \"crc\" DevicePath \"\"" Sep 30 17:14:28 crc kubenswrapper[4821]: I0930 17:14:28.149921 4821 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cf49a14-2605-4bfb-9dce-04b1438b107c-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:14:28 crc kubenswrapper[4821]: I0930 17:14:28.713673 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08d6cb47-472a-4bda-bfc0-738029e84e40" path="/var/lib/kubelet/pods/08d6cb47-472a-4bda-bfc0-738029e84e40/volumes" Sep 30 17:14:28 crc kubenswrapper[4821]: I0930 17:14:28.814324 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg" event={"ID":"3cf49a14-2605-4bfb-9dce-04b1438b107c","Type":"ContainerDied","Data":"0354ccdc5a242d27e70d388ab37c3f3ddb9e5f4fc6feac23c2fee0bc2a3c6874"} Sep 30 17:14:28 crc kubenswrapper[4821]: I0930 17:14:28.814360 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0354ccdc5a242d27e70d388ab37c3f3ddb9e5f4fc6feac23c2fee0bc2a3c6874" Sep 30 17:14:28 crc kubenswrapper[4821]: I0930 17:14:28.814364 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.294412 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-56f8dc8465-bxgfl"] Sep 30 17:14:37 crc kubenswrapper[4821]: E0930 17:14:37.296025 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08d6cb47-472a-4bda-bfc0-738029e84e40" containerName="console" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.296152 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="08d6cb47-472a-4bda-bfc0-738029e84e40" containerName="console" Sep 30 17:14:37 crc kubenswrapper[4821]: E0930 17:14:37.296237 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cf49a14-2605-4bfb-9dce-04b1438b107c" containerName="util" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.296310 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf49a14-2605-4bfb-9dce-04b1438b107c" containerName="util" Sep 30 17:14:37 crc kubenswrapper[4821]: E0930 17:14:37.296384 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cf49a14-2605-4bfb-9dce-04b1438b107c" containerName="extract" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.296451 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf49a14-2605-4bfb-9dce-04b1438b107c" containerName="extract" Sep 30 17:14:37 crc kubenswrapper[4821]: E0930 17:14:37.296525 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cf49a14-2605-4bfb-9dce-04b1438b107c" containerName="pull" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.296598 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf49a14-2605-4bfb-9dce-04b1438b107c" containerName="pull" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.296826 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="08d6cb47-472a-4bda-bfc0-738029e84e40" containerName="console" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.296919 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cf49a14-2605-4bfb-9dce-04b1438b107c" containerName="extract" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.297513 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-56f8dc8465-bxgfl" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.299796 4821 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.300137 4821 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.300620 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.300797 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.300903 4821 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-tsxbf" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.313537 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-56f8dc8465-bxgfl"] Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.410360 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7fvb\" (UniqueName: \"kubernetes.io/projected/c823710e-442d-4956-aaff-8822ff222043-kube-api-access-c7fvb\") pod \"metallb-operator-controller-manager-56f8dc8465-bxgfl\" (UID: \"c823710e-442d-4956-aaff-8822ff222043\") " pod="metallb-system/metallb-operator-controller-manager-56f8dc8465-bxgfl" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.410422 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c823710e-442d-4956-aaff-8822ff222043-apiservice-cert\") pod \"metallb-operator-controller-manager-56f8dc8465-bxgfl\" (UID: \"c823710e-442d-4956-aaff-8822ff222043\") " pod="metallb-system/metallb-operator-controller-manager-56f8dc8465-bxgfl" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.410529 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c823710e-442d-4956-aaff-8822ff222043-webhook-cert\") pod \"metallb-operator-controller-manager-56f8dc8465-bxgfl\" (UID: \"c823710e-442d-4956-aaff-8822ff222043\") " pod="metallb-system/metallb-operator-controller-manager-56f8dc8465-bxgfl" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.511701 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c823710e-442d-4956-aaff-8822ff222043-apiservice-cert\") pod \"metallb-operator-controller-manager-56f8dc8465-bxgfl\" (UID: \"c823710e-442d-4956-aaff-8822ff222043\") " pod="metallb-system/metallb-operator-controller-manager-56f8dc8465-bxgfl" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.511802 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c823710e-442d-4956-aaff-8822ff222043-webhook-cert\") pod \"metallb-operator-controller-manager-56f8dc8465-bxgfl\" (UID: \"c823710e-442d-4956-aaff-8822ff222043\") " pod="metallb-system/metallb-operator-controller-manager-56f8dc8465-bxgfl" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.511866 4821 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7fvb\" (UniqueName: \"kubernetes.io/projected/c823710e-442d-4956-aaff-8822ff222043-kube-api-access-c7fvb\") pod \"metallb-operator-controller-manager-56f8dc8465-bxgfl\" (UID: \"c823710e-442d-4956-aaff-8822ff222043\") " pod="metallb-system/metallb-operator-controller-manager-56f8dc8465-bxgfl" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.521075 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c823710e-442d-4956-aaff-8822ff222043-apiservice-cert\") pod \"metallb-operator-controller-manager-56f8dc8465-bxgfl\" (UID: \"c823710e-442d-4956-aaff-8822ff222043\") " pod="metallb-system/metallb-operator-controller-manager-56f8dc8465-bxgfl" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.531989 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c823710e-442d-4956-aaff-8822ff222043-webhook-cert\") pod \"metallb-operator-controller-manager-56f8dc8465-bxgfl\" (UID: \"c823710e-442d-4956-aaff-8822ff222043\") " pod="metallb-system/metallb-operator-controller-manager-56f8dc8465-bxgfl" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.547160 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7fvb\" (UniqueName: \"kubernetes.io/projected/c823710e-442d-4956-aaff-8822ff222043-kube-api-access-c7fvb\") pod \"metallb-operator-controller-manager-56f8dc8465-bxgfl\" (UID: \"c823710e-442d-4956-aaff-8822ff222043\") " pod="metallb-system/metallb-operator-controller-manager-56f8dc8465-bxgfl" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.559202 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-c79f4dfd9-xvwvr"] Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.559998 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-c79f4dfd9-xvwvr" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.573372 4821 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.576573 4821 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-k6f89" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.582380 4821 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.588629 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-c79f4dfd9-xvwvr"] Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.624374 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-56f8dc8465-bxgfl" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.719332 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b152dda-dec3-4365-bc77-cb8e52ca5cb0-webhook-cert\") pod \"metallb-operator-webhook-server-c79f4dfd9-xvwvr\" (UID: \"7b152dda-dec3-4365-bc77-cb8e52ca5cb0\") " pod="metallb-system/metallb-operator-webhook-server-c79f4dfd9-xvwvr" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.719368 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b152dda-dec3-4365-bc77-cb8e52ca5cb0-apiservice-cert\") pod \"metallb-operator-webhook-server-c79f4dfd9-xvwvr\" (UID: \"7b152dda-dec3-4365-bc77-cb8e52ca5cb0\") " pod="metallb-system/metallb-operator-webhook-server-c79f4dfd9-xvwvr" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.719405 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n9sw\" (UniqueName: \"kubernetes.io/projected/7b152dda-dec3-4365-bc77-cb8e52ca5cb0-kube-api-access-4n9sw\") pod \"metallb-operator-webhook-server-c79f4dfd9-xvwvr\" (UID: \"7b152dda-dec3-4365-bc77-cb8e52ca5cb0\") " pod="metallb-system/metallb-operator-webhook-server-c79f4dfd9-xvwvr" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.820674 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n9sw\" (UniqueName: \"kubernetes.io/projected/7b152dda-dec3-4365-bc77-cb8e52ca5cb0-kube-api-access-4n9sw\") pod \"metallb-operator-webhook-server-c79f4dfd9-xvwvr\" (UID: \"7b152dda-dec3-4365-bc77-cb8e52ca5cb0\") " pod="metallb-system/metallb-operator-webhook-server-c79f4dfd9-xvwvr" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.820984 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b152dda-dec3-4365-bc77-cb8e52ca5cb0-webhook-cert\") pod \"metallb-operator-webhook-server-c79f4dfd9-xvwvr\" (UID: \"7b152dda-dec3-4365-bc77-cb8e52ca5cb0\") " pod="metallb-system/metallb-operator-webhook-server-c79f4dfd9-xvwvr" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.821005 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b152dda-dec3-4365-bc77-cb8e52ca5cb0-apiservice-cert\") pod \"metallb-operator-webhook-server-c79f4dfd9-xvwvr\" (UID: \"7b152dda-dec3-4365-bc77-cb8e52ca5cb0\") " pod="metallb-system/metallb-operator-webhook-server-c79f4dfd9-xvwvr" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.835654 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b152dda-dec3-4365-bc77-cb8e52ca5cb0-webhook-cert\") pod \"metallb-operator-webhook-server-c79f4dfd9-xvwvr\" (UID: \"7b152dda-dec3-4365-bc77-cb8e52ca5cb0\") " pod="metallb-system/metallb-operator-webhook-server-c79f4dfd9-xvwvr" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.838786 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b152dda-dec3-4365-bc77-cb8e52ca5cb0-apiservice-cert\") pod \"metallb-operator-webhook-server-c79f4dfd9-xvwvr\" (UID: \"7b152dda-dec3-4365-bc77-cb8e52ca5cb0\") " 
pod="metallb-system/metallb-operator-webhook-server-c79f4dfd9-xvwvr" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.842244 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n9sw\" (UniqueName: \"kubernetes.io/projected/7b152dda-dec3-4365-bc77-cb8e52ca5cb0-kube-api-access-4n9sw\") pod \"metallb-operator-webhook-server-c79f4dfd9-xvwvr\" (UID: \"7b152dda-dec3-4365-bc77-cb8e52ca5cb0\") " pod="metallb-system/metallb-operator-webhook-server-c79f4dfd9-xvwvr" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.887272 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-c79f4dfd9-xvwvr" Sep 30 17:14:37 crc kubenswrapper[4821]: I0930 17:14:37.958007 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-56f8dc8465-bxgfl"] Sep 30 17:14:38 crc kubenswrapper[4821]: I0930 17:14:38.279386 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-c79f4dfd9-xvwvr"] Sep 30 17:14:38 crc kubenswrapper[4821]: W0930 17:14:38.288206 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b152dda_dec3_4365_bc77_cb8e52ca5cb0.slice/crio-6faef9b0e52f9c9c075b6cd65470ade1ce54ed9912808cf2c5e023dbd7caea0c WatchSource:0}: Error finding container 6faef9b0e52f9c9c075b6cd65470ade1ce54ed9912808cf2c5e023dbd7caea0c: Status 404 returned error can't find the container with id 6faef9b0e52f9c9c075b6cd65470ade1ce54ed9912808cf2c5e023dbd7caea0c Sep 30 17:14:38 crc kubenswrapper[4821]: I0930 17:14:38.873495 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-56f8dc8465-bxgfl" event={"ID":"c823710e-442d-4956-aaff-8822ff222043","Type":"ContainerStarted","Data":"3d31917f5d2aff5d1f31d4c6bf10405a8cbc304862190925d4638795ffca8377"} Sep 30 17:14:38 crc kubenswrapper[4821]: I0930 17:14:38.875967 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-c79f4dfd9-xvwvr" event={"ID":"7b152dda-dec3-4365-bc77-cb8e52ca5cb0","Type":"ContainerStarted","Data":"6faef9b0e52f9c9c075b6cd65470ade1ce54ed9912808cf2c5e023dbd7caea0c"} Sep 30 17:14:43 crc kubenswrapper[4821]: I0930 17:14:43.903199 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-56f8dc8465-bxgfl" event={"ID":"c823710e-442d-4956-aaff-8822ff222043","Type":"ContainerStarted","Data":"be667c3f4df37e63ee3bcf4d9510af97148f5082b9ffdda242255f216e308ec4"} Sep 30 17:14:43 crc kubenswrapper[4821]: I0930 17:14:43.903742 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-56f8dc8465-bxgfl" Sep 30 17:14:43 crc kubenswrapper[4821]: I0930 17:14:43.905107 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-c79f4dfd9-xvwvr" event={"ID":"7b152dda-dec3-4365-bc77-cb8e52ca5cb0","Type":"ContainerStarted","Data":"48bce68ced63f96679f0cf5f5f1bbc3658493c9e6e79298b3883b1b0645dc506"} Sep 30 17:14:43 crc kubenswrapper[4821]: I0930 17:14:43.905276 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-c79f4dfd9-xvwvr" Sep 30 17:14:43 crc kubenswrapper[4821]: I0930 17:14:43.924786 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/metallb-operator-controller-manager-56f8dc8465-bxgfl" podStartSLOduration=1.931089417 podStartE2EDuration="6.924771954s" podCreationTimestamp="2025-09-30 17:14:37 +0000 UTC" firstStartedPulling="2025-09-30 17:14:37.97404589 +0000 UTC m=+673.879091834" lastFinishedPulling="2025-09-30 17:14:42.967728427 +0000 UTC m=+678.872774371" observedRunningTime="2025-09-30 17:14:43.923475972 +0000 UTC m=+679.828521926" watchObservedRunningTime="2025-09-30 17:14:43.924771954 +0000 UTC m=+679.829817898" Sep 30 17:14:43 crc kubenswrapper[4821]: I0930 17:14:43.946955 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-c79f4dfd9-xvwvr" podStartSLOduration=2.252466795 podStartE2EDuration="6.946940399s" podCreationTimestamp="2025-09-30 17:14:37 +0000 UTC" firstStartedPulling="2025-09-30 17:14:38.291414687 +0000 UTC m=+674.196460631" lastFinishedPulling="2025-09-30 17:14:42.985888291 +0000 UTC m=+678.890934235" observedRunningTime="2025-09-30 17:14:43.943361749 +0000 UTC m=+679.848407693" watchObservedRunningTime="2025-09-30 17:14:43.946940399 +0000 UTC m=+679.851986343" Sep 30 17:14:57 crc kubenswrapper[4821]: I0930 17:14:57.894493 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-c79f4dfd9-xvwvr" Sep 30 17:15:00 crc kubenswrapper[4821]: I0930 17:15:00.130932 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320875-tbftg"] Sep 30 17:15:00 crc kubenswrapper[4821]: I0930 17:15:00.132116 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320875-tbftg" Sep 30 17:15:00 crc kubenswrapper[4821]: I0930 17:15:00.134465 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Sep 30 17:15:00 crc kubenswrapper[4821]: I0930 17:15:00.137473 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Sep 30 17:15:00 crc kubenswrapper[4821]: I0930 17:15:00.138665 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320875-tbftg"] Sep 30 17:15:00 crc kubenswrapper[4821]: I0930 17:15:00.196355 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba65db8d-223d-4446-934b-17665d7fd9ad-config-volume\") pod \"collect-profiles-29320875-tbftg\" (UID: \"ba65db8d-223d-4446-934b-17665d7fd9ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320875-tbftg" Sep 30 17:15:00 crc kubenswrapper[4821]: I0930 17:15:00.196398 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ba65db8d-223d-4446-934b-17665d7fd9ad-secret-volume\") pod \"collect-profiles-29320875-tbftg\" (UID: \"ba65db8d-223d-4446-934b-17665d7fd9ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320875-tbftg" Sep 30 17:15:00 crc kubenswrapper[4821]: I0930 17:15:00.196427 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm5mx\" (UniqueName: \"kubernetes.io/projected/ba65db8d-223d-4446-934b-17665d7fd9ad-kube-api-access-pm5mx\") pod 
\"collect-profiles-29320875-tbftg\" (UID: \"ba65db8d-223d-4446-934b-17665d7fd9ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320875-tbftg" Sep 30 17:15:00 crc kubenswrapper[4821]: I0930 17:15:00.297619 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba65db8d-223d-4446-934b-17665d7fd9ad-config-volume\") pod \"collect-profiles-29320875-tbftg\" (UID: \"ba65db8d-223d-4446-934b-17665d7fd9ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320875-tbftg" Sep 30 17:15:00 crc kubenswrapper[4821]: I0930 17:15:00.297672 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ba65db8d-223d-4446-934b-17665d7fd9ad-secret-volume\") pod \"collect-profiles-29320875-tbftg\" (UID: \"ba65db8d-223d-4446-934b-17665d7fd9ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320875-tbftg" Sep 30 17:15:00 crc kubenswrapper[4821]: I0930 17:15:00.297709 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm5mx\" (UniqueName: \"kubernetes.io/projected/ba65db8d-223d-4446-934b-17665d7fd9ad-kube-api-access-pm5mx\") pod \"collect-profiles-29320875-tbftg\" (UID: \"ba65db8d-223d-4446-934b-17665d7fd9ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320875-tbftg" Sep 30 17:15:00 crc kubenswrapper[4821]: I0930 17:15:00.298571 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba65db8d-223d-4446-934b-17665d7fd9ad-config-volume\") pod \"collect-profiles-29320875-tbftg\" (UID: \"ba65db8d-223d-4446-934b-17665d7fd9ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320875-tbftg" Sep 30 17:15:00 crc kubenswrapper[4821]: I0930 17:15:00.303516 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ba65db8d-223d-4446-934b-17665d7fd9ad-secret-volume\") pod \"collect-profiles-29320875-tbftg\" (UID: \"ba65db8d-223d-4446-934b-17665d7fd9ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320875-tbftg" Sep 30 17:15:00 crc kubenswrapper[4821]: I0930 17:15:00.318899 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm5mx\" (UniqueName: \"kubernetes.io/projected/ba65db8d-223d-4446-934b-17665d7fd9ad-kube-api-access-pm5mx\") pod \"collect-profiles-29320875-tbftg\" (UID: \"ba65db8d-223d-4446-934b-17665d7fd9ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320875-tbftg" Sep 30 17:15:00 crc kubenswrapper[4821]: I0930 17:15:00.458827 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320875-tbftg" Sep 30 17:15:00 crc kubenswrapper[4821]: I0930 17:15:00.937656 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320875-tbftg"] Sep 30 17:15:00 crc kubenswrapper[4821]: I0930 17:15:00.988119 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320875-tbftg" event={"ID":"ba65db8d-223d-4446-934b-17665d7fd9ad","Type":"ContainerStarted","Data":"8fe94acf3f33f2e68021a81c77b7dc8bf2215b6cf85ab496805613a97f3754ce"} Sep 30 17:15:01 crc kubenswrapper[4821]: I0930 17:15:01.994793 4821 generic.go:334] "Generic (PLEG): container finished" podID="ba65db8d-223d-4446-934b-17665d7fd9ad" containerID="bfa4a9d847997ea91babbb8c17137654c3e4bba8450064b0d424e0658358b57e" exitCode=0 Sep 30 17:15:01 crc kubenswrapper[4821]: I0930 17:15:01.994845 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320875-tbftg" event={"ID":"ba65db8d-223d-4446-934b-17665d7fd9ad","Type":"ContainerDied","Data":"bfa4a9d847997ea91babbb8c17137654c3e4bba8450064b0d424e0658358b57e"} Sep 30 17:15:03 crc kubenswrapper[4821]: I0930 17:15:03.309135 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320875-tbftg" Sep 30 17:15:03 crc kubenswrapper[4821]: I0930 17:15:03.436429 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ba65db8d-223d-4446-934b-17665d7fd9ad-secret-volume\") pod \"ba65db8d-223d-4446-934b-17665d7fd9ad\" (UID: \"ba65db8d-223d-4446-934b-17665d7fd9ad\") " Sep 30 17:15:03 crc kubenswrapper[4821]: I0930 17:15:03.436492 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm5mx\" (UniqueName: \"kubernetes.io/projected/ba65db8d-223d-4446-934b-17665d7fd9ad-kube-api-access-pm5mx\") pod \"ba65db8d-223d-4446-934b-17665d7fd9ad\" (UID: \"ba65db8d-223d-4446-934b-17665d7fd9ad\") " Sep 30 17:15:03 crc kubenswrapper[4821]: I0930 17:15:03.436575 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba65db8d-223d-4446-934b-17665d7fd9ad-config-volume\") pod \"ba65db8d-223d-4446-934b-17665d7fd9ad\" (UID: \"ba65db8d-223d-4446-934b-17665d7fd9ad\") " Sep 30 17:15:03 crc kubenswrapper[4821]: I0930 17:15:03.437362 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba65db8d-223d-4446-934b-17665d7fd9ad-config-volume" (OuterVolumeSpecName: "config-volume") pod "ba65db8d-223d-4446-934b-17665d7fd9ad" (UID: "ba65db8d-223d-4446-934b-17665d7fd9ad"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:15:03 crc kubenswrapper[4821]: I0930 17:15:03.441485 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba65db8d-223d-4446-934b-17665d7fd9ad-kube-api-access-pm5mx" (OuterVolumeSpecName: "kube-api-access-pm5mx") pod "ba65db8d-223d-4446-934b-17665d7fd9ad" (UID: "ba65db8d-223d-4446-934b-17665d7fd9ad"). InnerVolumeSpecName "kube-api-access-pm5mx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:15:03 crc kubenswrapper[4821]: I0930 17:15:03.442731 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba65db8d-223d-4446-934b-17665d7fd9ad-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ba65db8d-223d-4446-934b-17665d7fd9ad" (UID: "ba65db8d-223d-4446-934b-17665d7fd9ad"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:15:03 crc kubenswrapper[4821]: I0930 17:15:03.537494 4821 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ba65db8d-223d-4446-934b-17665d7fd9ad-secret-volume\") on node \"crc\" DevicePath \"\"" Sep 30 17:15:03 crc kubenswrapper[4821]: I0930 17:15:03.537775 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm5mx\" (UniqueName: \"kubernetes.io/projected/ba65db8d-223d-4446-934b-17665d7fd9ad-kube-api-access-pm5mx\") on node \"crc\" DevicePath \"\"" Sep 30 17:15:03 crc kubenswrapper[4821]: I0930 17:15:03.537786 4821 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba65db8d-223d-4446-934b-17665d7fd9ad-config-volume\") on node \"crc\" DevicePath \"\"" Sep 30 17:15:04 crc kubenswrapper[4821]: I0930 17:15:04.005493 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320875-tbftg" event={"ID":"ba65db8d-223d-4446-934b-17665d7fd9ad","Type":"ContainerDied","Data":"8fe94acf3f33f2e68021a81c77b7dc8bf2215b6cf85ab496805613a97f3754ce"} Sep 30 17:15:04 crc kubenswrapper[4821]: I0930 17:15:04.005537 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fe94acf3f33f2e68021a81c77b7dc8bf2215b6cf85ab496805613a97f3754ce" Sep 30 17:15:04 crc kubenswrapper[4821]: I0930 17:15:04.005596 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320875-tbftg" Sep 30 17:15:17 crc kubenswrapper[4821]: I0930 17:15:17.627184 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-56f8dc8465-bxgfl" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.291916 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-5478bdb765-9r6qh"] Sep 30 17:15:18 crc kubenswrapper[4821]: E0930 17:15:18.292375 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba65db8d-223d-4446-934b-17665d7fd9ad" containerName="collect-profiles" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.292400 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba65db8d-223d-4446-934b-17665d7fd9ad" containerName="collect-profiles" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.292585 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba65db8d-223d-4446-934b-17665d7fd9ad" containerName="collect-profiles" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.293060 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-9r6qh" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.296182 4821 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.296529 4821 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-clhsh" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.296719 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-xnl56"] Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.302712 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-xnl56" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.305400 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.305650 4821 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.312111 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-5478bdb765-9r6qh"] Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.327399 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9deaad26-049a-4380-99c4-8d34358367af-cert\") pod \"frr-k8s-webhook-server-5478bdb765-9r6qh\" (UID: \"9deaad26-049a-4380-99c4-8d34358367af\") " pod="metallb-system/frr-k8s-webhook-server-5478bdb765-9r6qh" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.327489 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d38ffd03-48b8-4684-aff0-089081da1320-metrics-certs\") pod \"frr-k8s-xnl56\" (UID: \"d38ffd03-48b8-4684-aff0-089081da1320\") " pod="metallb-system/frr-k8s-xnl56" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.327526 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5g6k\" (UniqueName: \"kubernetes.io/projected/9deaad26-049a-4380-99c4-8d34358367af-kube-api-access-j5g6k\") pod \"frr-k8s-webhook-server-5478bdb765-9r6qh\" (UID: \"9deaad26-049a-4380-99c4-8d34358367af\") " pod="metallb-system/frr-k8s-webhook-server-5478bdb765-9r6qh" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.327554 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d38ffd03-48b8-4684-aff0-089081da1320-frr-startup\") pod \"frr-k8s-xnl56\" (UID: \"d38ffd03-48b8-4684-aff0-089081da1320\") " pod="metallb-system/frr-k8s-xnl56" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.327582 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghhjx\" (UniqueName: \"kubernetes.io/projected/d38ffd03-48b8-4684-aff0-089081da1320-kube-api-access-ghhjx\") pod \"frr-k8s-xnl56\" (UID: \"d38ffd03-48b8-4684-aff0-089081da1320\") " pod="metallb-system/frr-k8s-xnl56" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.327608 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: 
\"kubernetes.io/empty-dir/d38ffd03-48b8-4684-aff0-089081da1320-reloader\") pod \"frr-k8s-xnl56\" (UID: \"d38ffd03-48b8-4684-aff0-089081da1320\") " pod="metallb-system/frr-k8s-xnl56" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.327639 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d38ffd03-48b8-4684-aff0-089081da1320-frr-conf\") pod \"frr-k8s-xnl56\" (UID: \"d38ffd03-48b8-4684-aff0-089081da1320\") " pod="metallb-system/frr-k8s-xnl56" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.327661 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d38ffd03-48b8-4684-aff0-089081da1320-frr-sockets\") pod \"frr-k8s-xnl56\" (UID: \"d38ffd03-48b8-4684-aff0-089081da1320\") " pod="metallb-system/frr-k8s-xnl56" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.327703 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d38ffd03-48b8-4684-aff0-089081da1320-metrics\") pod \"frr-k8s-xnl56\" (UID: \"d38ffd03-48b8-4684-aff0-089081da1320\") " pod="metallb-system/frr-k8s-xnl56" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.395375 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-df78k"] Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.396645 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-df78k" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.399053 4821 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.399477 4821 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.399720 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.399919 4821 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-z7qzp" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.423142 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-5d688f5ffc-k9l8b"] Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.424056 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5d688f5ffc-k9l8b" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.428003 4821 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.428384 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlct6\" (UniqueName: \"kubernetes.io/projected/54bb31d4-ac1a-4dcc-acaa-6dd8f4452921-kube-api-access-rlct6\") pod \"speaker-df78k\" (UID: \"54bb31d4-ac1a-4dcc-acaa-6dd8f4452921\") " pod="metallb-system/speaker-df78k" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.428481 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/54bb31d4-ac1a-4dcc-acaa-6dd8f4452921-memberlist\") pod \"speaker-df78k\" (UID: \"54bb31d4-ac1a-4dcc-acaa-6dd8f4452921\") " pod="metallb-system/speaker-df78k" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.428560 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d38ffd03-48b8-4684-aff0-089081da1320-metrics-certs\") pod \"frr-k8s-xnl56\" (UID: \"d38ffd03-48b8-4684-aff0-089081da1320\") " pod="metallb-system/frr-k8s-xnl56" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.428635 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5g6k\" (UniqueName: \"kubernetes.io/projected/9deaad26-049a-4380-99c4-8d34358367af-kube-api-access-j5g6k\") pod \"frr-k8s-webhook-server-5478bdb765-9r6qh\" (UID: \"9deaad26-049a-4380-99c4-8d34358367af\") " pod="metallb-system/frr-k8s-webhook-server-5478bdb765-9r6qh" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.428707 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d38ffd03-48b8-4684-aff0-089081da1320-frr-startup\") pod \"frr-k8s-xnl56\" (UID: \"d38ffd03-48b8-4684-aff0-089081da1320\") " pod="metallb-system/frr-k8s-xnl56" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.428780 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/54bb31d4-ac1a-4dcc-acaa-6dd8f4452921-metrics-certs\") pod \"speaker-df78k\" (UID: \"54bb31d4-ac1a-4dcc-acaa-6dd8f4452921\") " pod="metallb-system/speaker-df78k" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.428865 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghhjx\" (UniqueName: \"kubernetes.io/projected/d38ffd03-48b8-4684-aff0-089081da1320-kube-api-access-ghhjx\") pod \"frr-k8s-xnl56\" (UID: \"d38ffd03-48b8-4684-aff0-089081da1320\") " pod="metallb-system/frr-k8s-xnl56" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.428942 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d38ffd03-48b8-4684-aff0-089081da1320-reloader\") pod \"frr-k8s-xnl56\" (UID: \"d38ffd03-48b8-4684-aff0-089081da1320\") " pod="metallb-system/frr-k8s-xnl56" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.429016 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d38ffd03-48b8-4684-aff0-089081da1320-frr-conf\") pod \"frr-k8s-xnl56\" (UID: 
\"d38ffd03-48b8-4684-aff0-089081da1320\") " pod="metallb-system/frr-k8s-xnl56" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.429095 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d38ffd03-48b8-4684-aff0-089081da1320-frr-sockets\") pod \"frr-k8s-xnl56\" (UID: \"d38ffd03-48b8-4684-aff0-089081da1320\") " pod="metallb-system/frr-k8s-xnl56" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.429191 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d38ffd03-48b8-4684-aff0-089081da1320-metrics\") pod \"frr-k8s-xnl56\" (UID: \"d38ffd03-48b8-4684-aff0-089081da1320\") " pod="metallb-system/frr-k8s-xnl56" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.429261 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/54bb31d4-ac1a-4dcc-acaa-6dd8f4452921-metallb-excludel2\") pod \"speaker-df78k\" (UID: \"54bb31d4-ac1a-4dcc-acaa-6dd8f4452921\") " pod="metallb-system/speaker-df78k" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.429343 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9deaad26-049a-4380-99c4-8d34358367af-cert\") pod \"frr-k8s-webhook-server-5478bdb765-9r6qh\" (UID: \"9deaad26-049a-4380-99c4-8d34358367af\") " pod="metallb-system/frr-k8s-webhook-server-5478bdb765-9r6qh" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.430393 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d38ffd03-48b8-4684-aff0-089081da1320-frr-conf\") pod \"frr-k8s-xnl56\" (UID: \"d38ffd03-48b8-4684-aff0-089081da1320\") " pod="metallb-system/frr-k8s-xnl56" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.430477 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d38ffd03-48b8-4684-aff0-089081da1320-metrics\") pod \"frr-k8s-xnl56\" (UID: \"d38ffd03-48b8-4684-aff0-089081da1320\") " pod="metallb-system/frr-k8s-xnl56" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.430553 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d38ffd03-48b8-4684-aff0-089081da1320-frr-sockets\") pod \"frr-k8s-xnl56\" (UID: \"d38ffd03-48b8-4684-aff0-089081da1320\") " pod="metallb-system/frr-k8s-xnl56" Sep 30 17:15:18 crc kubenswrapper[4821]: E0930 17:15:18.430574 4821 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.430599 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d38ffd03-48b8-4684-aff0-089081da1320-reloader\") pod \"frr-k8s-xnl56\" (UID: \"d38ffd03-48b8-4684-aff0-089081da1320\") " pod="metallb-system/frr-k8s-xnl56" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.430617 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d38ffd03-48b8-4684-aff0-089081da1320-frr-startup\") pod \"frr-k8s-xnl56\" (UID: \"d38ffd03-48b8-4684-aff0-089081da1320\") " pod="metallb-system/frr-k8s-xnl56" Sep 30 17:15:18 crc kubenswrapper[4821]: E0930 17:15:18.430622 4821 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d38ffd03-48b8-4684-aff0-089081da1320-metrics-certs podName:d38ffd03-48b8-4684-aff0-089081da1320 nodeName:}" failed. No retries permitted until 2025-09-30 17:15:18.930605168 +0000 UTC m=+714.835651182 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d38ffd03-48b8-4684-aff0-089081da1320-metrics-certs") pod "frr-k8s-xnl56" (UID: "d38ffd03-48b8-4684-aff0-089081da1320") : secret "frr-k8s-certs-secret" not found Sep 30 17:15:18 crc kubenswrapper[4821]: E0930 17:15:18.430639 4821 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Sep 30 17:15:18 crc kubenswrapper[4821]: E0930 17:15:18.430705 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9deaad26-049a-4380-99c4-8d34358367af-cert podName:9deaad26-049a-4380-99c4-8d34358367af nodeName:}" failed. No retries permitted until 2025-09-30 17:15:18.93068649 +0000 UTC m=+714.835732444 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9deaad26-049a-4380-99c4-8d34358367af-cert") pod "frr-k8s-webhook-server-5478bdb765-9r6qh" (UID: "9deaad26-049a-4380-99c4-8d34358367af") : secret "frr-k8s-webhook-server-cert" not found Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.458263 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5d688f5ffc-k9l8b"] Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.525924 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghhjx\" (UniqueName: \"kubernetes.io/projected/d38ffd03-48b8-4684-aff0-089081da1320-kube-api-access-ghhjx\") pod \"frr-k8s-xnl56\" (UID: \"d38ffd03-48b8-4684-aff0-089081da1320\") " pod="metallb-system/frr-k8s-xnl56" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.535142 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/54bb31d4-ac1a-4dcc-acaa-6dd8f4452921-metrics-certs\") pod \"speaker-df78k\" (UID: \"54bb31d4-ac1a-4dcc-acaa-6dd8f4452921\") " pod="metallb-system/speaker-df78k" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.552698 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3057dbb5-a3f4-46ec-a33e-187a35d695a9-cert\") pod \"controller-5d688f5ffc-k9l8b\" (UID: \"3057dbb5-a3f4-46ec-a33e-187a35d695a9\") " pod="metallb-system/controller-5d688f5ffc-k9l8b" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.552835 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw8hf\" (UniqueName: \"kubernetes.io/projected/3057dbb5-a3f4-46ec-a33e-187a35d695a9-kube-api-access-fw8hf\") pod \"controller-5d688f5ffc-k9l8b\" (UID: \"3057dbb5-a3f4-46ec-a33e-187a35d695a9\") " pod="metallb-system/controller-5d688f5ffc-k9l8b" Sep 30 17:15:18 crc kubenswrapper[4821]: E0930 17:15:18.535238 4821 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.553351 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/54bb31d4-ac1a-4dcc-acaa-6dd8f4452921-metallb-excludel2\") pod 
\"speaker-df78k\" (UID: \"54bb31d4-ac1a-4dcc-acaa-6dd8f4452921\") " pod="metallb-system/speaker-df78k" Sep 30 17:15:18 crc kubenswrapper[4821]: E0930 17:15:18.553415 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54bb31d4-ac1a-4dcc-acaa-6dd8f4452921-metrics-certs podName:54bb31d4-ac1a-4dcc-acaa-6dd8f4452921 nodeName:}" failed. No retries permitted until 2025-09-30 17:15:19.053368899 +0000 UTC m=+714.958414843 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/54bb31d4-ac1a-4dcc-acaa-6dd8f4452921-metrics-certs") pod "speaker-df78k" (UID: "54bb31d4-ac1a-4dcc-acaa-6dd8f4452921") : secret "speaker-certs-secret" not found Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.553515 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3057dbb5-a3f4-46ec-a33e-187a35d695a9-metrics-certs\") pod \"controller-5d688f5ffc-k9l8b\" (UID: \"3057dbb5-a3f4-46ec-a33e-187a35d695a9\") " pod="metallb-system/controller-5d688f5ffc-k9l8b" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.553584 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlct6\" (UniqueName: \"kubernetes.io/projected/54bb31d4-ac1a-4dcc-acaa-6dd8f4452921-kube-api-access-rlct6\") pod \"speaker-df78k\" (UID: \"54bb31d4-ac1a-4dcc-acaa-6dd8f4452921\") " pod="metallb-system/speaker-df78k" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.553646 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/54bb31d4-ac1a-4dcc-acaa-6dd8f4452921-memberlist\") pod \"speaker-df78k\" (UID: \"54bb31d4-ac1a-4dcc-acaa-6dd8f4452921\") " pod="metallb-system/speaker-df78k" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.545130 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5g6k\" (UniqueName: \"kubernetes.io/projected/9deaad26-049a-4380-99c4-8d34358367af-kube-api-access-j5g6k\") pod \"frr-k8s-webhook-server-5478bdb765-9r6qh\" (UID: \"9deaad26-049a-4380-99c4-8d34358367af\") " pod="metallb-system/frr-k8s-webhook-server-5478bdb765-9r6qh" Sep 30 17:15:18 crc kubenswrapper[4821]: E0930 17:15:18.554070 4821 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Sep 30 17:15:18 crc kubenswrapper[4821]: E0930 17:15:18.554175 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54bb31d4-ac1a-4dcc-acaa-6dd8f4452921-memberlist podName:54bb31d4-ac1a-4dcc-acaa-6dd8f4452921 nodeName:}" failed. No retries permitted until 2025-09-30 17:15:19.054164039 +0000 UTC m=+714.959209983 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/54bb31d4-ac1a-4dcc-acaa-6dd8f4452921-memberlist") pod "speaker-df78k" (UID: "54bb31d4-ac1a-4dcc-acaa-6dd8f4452921") : secret "metallb-memberlist" not found Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.554635 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/54bb31d4-ac1a-4dcc-acaa-6dd8f4452921-metallb-excludel2\") pod \"speaker-df78k\" (UID: \"54bb31d4-ac1a-4dcc-acaa-6dd8f4452921\") " pod="metallb-system/speaker-df78k" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.588762 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlct6\" (UniqueName: \"kubernetes.io/projected/54bb31d4-ac1a-4dcc-acaa-6dd8f4452921-kube-api-access-rlct6\") pod \"speaker-df78k\" (UID: \"54bb31d4-ac1a-4dcc-acaa-6dd8f4452921\") " pod="metallb-system/speaker-df78k" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.654898 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3057dbb5-a3f4-46ec-a33e-187a35d695a9-metrics-certs\") pod \"controller-5d688f5ffc-k9l8b\" (UID: \"3057dbb5-a3f4-46ec-a33e-187a35d695a9\") " pod="metallb-system/controller-5d688f5ffc-k9l8b" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.655001 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3057dbb5-a3f4-46ec-a33e-187a35d695a9-cert\") pod \"controller-5d688f5ffc-k9l8b\" (UID: \"3057dbb5-a3f4-46ec-a33e-187a35d695a9\") " pod="metallb-system/controller-5d688f5ffc-k9l8b" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.655034 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw8hf\" (UniqueName: \"kubernetes.io/projected/3057dbb5-a3f4-46ec-a33e-187a35d695a9-kube-api-access-fw8hf\") pod \"controller-5d688f5ffc-k9l8b\" (UID: \"3057dbb5-a3f4-46ec-a33e-187a35d695a9\") " pod="metallb-system/controller-5d688f5ffc-k9l8b" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.660707 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3057dbb5-a3f4-46ec-a33e-187a35d695a9-metrics-certs\") pod \"controller-5d688f5ffc-k9l8b\" (UID: \"3057dbb5-a3f4-46ec-a33e-187a35d695a9\") " pod="metallb-system/controller-5d688f5ffc-k9l8b" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.661001 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3057dbb5-a3f4-46ec-a33e-187a35d695a9-cert\") pod \"controller-5d688f5ffc-k9l8b\" (UID: \"3057dbb5-a3f4-46ec-a33e-187a35d695a9\") " pod="metallb-system/controller-5d688f5ffc-k9l8b" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.675930 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw8hf\" (UniqueName: \"kubernetes.io/projected/3057dbb5-a3f4-46ec-a33e-187a35d695a9-kube-api-access-fw8hf\") pod \"controller-5d688f5ffc-k9l8b\" (UID: \"3057dbb5-a3f4-46ec-a33e-187a35d695a9\") " pod="metallb-system/controller-5d688f5ffc-k9l8b" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.737231 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5d688f5ffc-k9l8b" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.962303 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9deaad26-049a-4380-99c4-8d34358367af-cert\") pod \"frr-k8s-webhook-server-5478bdb765-9r6qh\" (UID: \"9deaad26-049a-4380-99c4-8d34358367af\") " pod="metallb-system/frr-k8s-webhook-server-5478bdb765-9r6qh" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.962713 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d38ffd03-48b8-4684-aff0-089081da1320-metrics-certs\") pod \"frr-k8s-xnl56\" (UID: \"d38ffd03-48b8-4684-aff0-089081da1320\") " pod="metallb-system/frr-k8s-xnl56" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.965992 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9deaad26-049a-4380-99c4-8d34358367af-cert\") pod \"frr-k8s-webhook-server-5478bdb765-9r6qh\" (UID: \"9deaad26-049a-4380-99c4-8d34358367af\") " pod="metallb-system/frr-k8s-webhook-server-5478bdb765-9r6qh" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.966463 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d38ffd03-48b8-4684-aff0-089081da1320-metrics-certs\") pod \"frr-k8s-xnl56\" (UID: \"d38ffd03-48b8-4684-aff0-089081da1320\") " pod="metallb-system/frr-k8s-xnl56" Sep 30 17:15:18 crc kubenswrapper[4821]: I0930 17:15:18.972536 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5d688f5ffc-k9l8b"] Sep 30 17:15:18 crc kubenswrapper[4821]: W0930 17:15:18.980002 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3057dbb5_a3f4_46ec_a33e_187a35d695a9.slice/crio-383137797c9606128018bd28c9da7c2c968d78fb867a5558b0bca1ab0bc56283 WatchSource:0}: Error finding container 383137797c9606128018bd28c9da7c2c968d78fb867a5558b0bca1ab0bc56283: Status 404 returned error can't find the container with id 383137797c9606128018bd28c9da7c2c968d78fb867a5558b0bca1ab0bc56283 Sep 30 17:15:19 crc kubenswrapper[4821]: I0930 17:15:19.063732 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/54bb31d4-ac1a-4dcc-acaa-6dd8f4452921-memberlist\") pod \"speaker-df78k\" (UID: \"54bb31d4-ac1a-4dcc-acaa-6dd8f4452921\") " pod="metallb-system/speaker-df78k" Sep 30 17:15:19 crc kubenswrapper[4821]: I0930 17:15:19.063800 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/54bb31d4-ac1a-4dcc-acaa-6dd8f4452921-metrics-certs\") pod \"speaker-df78k\" (UID: \"54bb31d4-ac1a-4dcc-acaa-6dd8f4452921\") " pod="metallb-system/speaker-df78k" Sep 30 17:15:19 crc kubenswrapper[4821]: E0930 17:15:19.063908 4821 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Sep 30 17:15:19 crc kubenswrapper[4821]: E0930 17:15:19.063975 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54bb31d4-ac1a-4dcc-acaa-6dd8f4452921-memberlist podName:54bb31d4-ac1a-4dcc-acaa-6dd8f4452921 nodeName:}" failed. No retries permitted until 2025-09-30 17:15:20.063958836 +0000 UTC m=+715.969004780 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/54bb31d4-ac1a-4dcc-acaa-6dd8f4452921-memberlist") pod "speaker-df78k" (UID: "54bb31d4-ac1a-4dcc-acaa-6dd8f4452921") : secret "metallb-memberlist" not found Sep 30 17:15:19 crc kubenswrapper[4821]: I0930 17:15:19.068420 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/54bb31d4-ac1a-4dcc-acaa-6dd8f4452921-metrics-certs\") pod \"speaker-df78k\" (UID: \"54bb31d4-ac1a-4dcc-acaa-6dd8f4452921\") " pod="metallb-system/speaker-df78k" Sep 30 17:15:19 crc kubenswrapper[4821]: I0930 17:15:19.079997 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5d688f5ffc-k9l8b" event={"ID":"3057dbb5-a3f4-46ec-a33e-187a35d695a9","Type":"ContainerStarted","Data":"383137797c9606128018bd28c9da7c2c968d78fb867a5558b0bca1ab0bc56283"} Sep 30 17:15:19 crc kubenswrapper[4821]: I0930 17:15:19.210658 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-9r6qh" Sep 30 17:15:19 crc kubenswrapper[4821]: I0930 17:15:19.219563 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-xnl56" Sep 30 17:15:19 crc kubenswrapper[4821]: I0930 17:15:19.349345 4821 patch_prober.go:28] interesting pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:15:19 crc kubenswrapper[4821]: I0930 17:15:19.349685 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:15:19 crc kubenswrapper[4821]: I0930 17:15:19.429913 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-5478bdb765-9r6qh"] Sep 30 17:15:19 crc kubenswrapper[4821]: W0930 17:15:19.438190 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9deaad26_049a_4380_99c4_8d34358367af.slice/crio-38d4015c47c87dc1a0201262c9682c45064627ebcd34bbf5a7198d7fb618ee8e WatchSource:0}: Error finding container 38d4015c47c87dc1a0201262c9682c45064627ebcd34bbf5a7198d7fb618ee8e: Status 404 returned error can't find the container with id 38d4015c47c87dc1a0201262c9682c45064627ebcd34bbf5a7198d7fb618ee8e Sep 30 17:15:20 crc kubenswrapper[4821]: I0930 17:15:20.075505 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/54bb31d4-ac1a-4dcc-acaa-6dd8f4452921-memberlist\") pod \"speaker-df78k\" (UID: \"54bb31d4-ac1a-4dcc-acaa-6dd8f4452921\") " pod="metallb-system/speaker-df78k" Sep 30 17:15:20 crc kubenswrapper[4821]: I0930 17:15:20.088180 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-9r6qh" event={"ID":"9deaad26-049a-4380-99c4-8d34358367af","Type":"ContainerStarted","Data":"38d4015c47c87dc1a0201262c9682c45064627ebcd34bbf5a7198d7fb618ee8e"} Sep 30 17:15:20 crc kubenswrapper[4821]: I0930 17:15:20.089739 4821 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="metallb-system/controller-5d688f5ffc-k9l8b" event={"ID":"3057dbb5-a3f4-46ec-a33e-187a35d695a9","Type":"ContainerStarted","Data":"1d2ed3be42804977534dc80e6259d676702f51e79f66ce02073f91d6cfa71f0d"} Sep 30 17:15:20 crc kubenswrapper[4821]: I0930 17:15:20.089790 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5d688f5ffc-k9l8b" event={"ID":"3057dbb5-a3f4-46ec-a33e-187a35d695a9","Type":"ContainerStarted","Data":"0f50a5c301d0386c8b62e9685991a9f9be5aaa29419f663a1b12e37bd6756aa3"} Sep 30 17:15:20 crc kubenswrapper[4821]: I0930 17:15:20.089789 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/54bb31d4-ac1a-4dcc-acaa-6dd8f4452921-memberlist\") pod \"speaker-df78k\" (UID: \"54bb31d4-ac1a-4dcc-acaa-6dd8f4452921\") " pod="metallb-system/speaker-df78k" Sep 30 17:15:20 crc kubenswrapper[4821]: I0930 17:15:20.089954 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-5d688f5ffc-k9l8b" Sep 30 17:15:20 crc kubenswrapper[4821]: I0930 17:15:20.090858 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xnl56" event={"ID":"d38ffd03-48b8-4684-aff0-089081da1320","Type":"ContainerStarted","Data":"f17d1002945196616557a99cc8be5a300efaf53310925c758897c1d723652fa3"} Sep 30 17:15:20 crc kubenswrapper[4821]: I0930 17:15:20.105779 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-5d688f5ffc-k9l8b" podStartSLOduration=2.105761947 podStartE2EDuration="2.105761947s" podCreationTimestamp="2025-09-30 17:15:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:15:20.103116211 +0000 UTC m=+716.008162155" watchObservedRunningTime="2025-09-30 17:15:20.105761947 +0000 UTC m=+716.010807891" Sep 30 17:15:20 crc kubenswrapper[4821]: I0930 17:15:20.210657 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-df78k" Sep 30 17:15:20 crc kubenswrapper[4821]: W0930 17:15:20.231421 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54bb31d4_ac1a_4dcc_acaa_6dd8f4452921.slice/crio-a06e7668c7d764dd72475aec174d88a2ed0117230e6beb0d51402f639e417695 WatchSource:0}: Error finding container a06e7668c7d764dd72475aec174d88a2ed0117230e6beb0d51402f639e417695: Status 404 returned error can't find the container with id a06e7668c7d764dd72475aec174d88a2ed0117230e6beb0d51402f639e417695 Sep 30 17:15:21 crc kubenswrapper[4821]: I0930 17:15:21.099153 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-df78k" event={"ID":"54bb31d4-ac1a-4dcc-acaa-6dd8f4452921","Type":"ContainerStarted","Data":"3dd707682c7f8d5496d6f7d805619f779e0f512b6bd54bf90a04a6c96a01ee77"} Sep 30 17:15:21 crc kubenswrapper[4821]: I0930 17:15:21.099200 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-df78k" event={"ID":"54bb31d4-ac1a-4dcc-acaa-6dd8f4452921","Type":"ContainerStarted","Data":"1bd846268637658c5002811d4dd6448593a6d953da600c7af3002efc13a1fcf1"} Sep 30 17:15:21 crc kubenswrapper[4821]: I0930 17:15:21.099232 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-df78k" event={"ID":"54bb31d4-ac1a-4dcc-acaa-6dd8f4452921","Type":"ContainerStarted","Data":"a06e7668c7d764dd72475aec174d88a2ed0117230e6beb0d51402f639e417695"} Sep 30 17:15:21 crc kubenswrapper[4821]: I0930 17:15:21.099589 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-df78k" Sep 30 17:15:21 crc kubenswrapper[4821]: I0930 17:15:21.115394 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-df78k" podStartSLOduration=3.115379657 podStartE2EDuration="3.115379657s" podCreationTimestamp="2025-09-30 17:15:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:15:21.112863334 +0000 UTC m=+717.017909278" watchObservedRunningTime="2025-09-30 17:15:21.115379657 +0000 UTC m=+717.020425601" Sep 30 17:15:28 crc kubenswrapper[4821]: I0930 17:15:28.146286 4821 generic.go:334] "Generic (PLEG): container finished" podID="d38ffd03-48b8-4684-aff0-089081da1320" containerID="237e9fc31f255920f1568198e8f4fef8e391279f88d34451c3d3fbfc571773b3" exitCode=0 Sep 30 17:15:28 crc kubenswrapper[4821]: I0930 17:15:28.146339 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xnl56" event={"ID":"d38ffd03-48b8-4684-aff0-089081da1320","Type":"ContainerDied","Data":"237e9fc31f255920f1568198e8f4fef8e391279f88d34451c3d3fbfc571773b3"} Sep 30 17:15:28 crc kubenswrapper[4821]: I0930 17:15:28.150404 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-9r6qh" event={"ID":"9deaad26-049a-4380-99c4-8d34358367af","Type":"ContainerStarted","Data":"0b793ca456c436eca6b75b8a950d2134f384c9aaeabd149534ca713eb2d603f7"} Sep 30 17:15:28 crc kubenswrapper[4821]: I0930 17:15:28.150558 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-9r6qh" Sep 30 17:15:28 crc kubenswrapper[4821]: I0930 17:15:28.197977 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-9r6qh" podStartSLOduration=2.479730973 
podStartE2EDuration="10.197958183s" podCreationTimestamp="2025-09-30 17:15:18 +0000 UTC" firstStartedPulling="2025-09-30 17:15:19.440121366 +0000 UTC m=+715.345167310" lastFinishedPulling="2025-09-30 17:15:27.158348576 +0000 UTC m=+723.063394520" observedRunningTime="2025-09-30 17:15:28.193936623 +0000 UTC m=+724.098982567" watchObservedRunningTime="2025-09-30 17:15:28.197958183 +0000 UTC m=+724.103004127" Sep 30 17:15:29 crc kubenswrapper[4821]: I0930 17:15:29.157371 4821 generic.go:334] "Generic (PLEG): container finished" podID="d38ffd03-48b8-4684-aff0-089081da1320" containerID="0985ce17670ea09379db6d406a450895c96390a23f00e9a24c1e3e9a06a80691" exitCode=0 Sep 30 17:15:29 crc kubenswrapper[4821]: I0930 17:15:29.157423 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xnl56" event={"ID":"d38ffd03-48b8-4684-aff0-089081da1320","Type":"ContainerDied","Data":"0985ce17670ea09379db6d406a450895c96390a23f00e9a24c1e3e9a06a80691"} Sep 30 17:15:30 crc kubenswrapper[4821]: I0930 17:15:30.164341 4821 generic.go:334] "Generic (PLEG): container finished" podID="d38ffd03-48b8-4684-aff0-089081da1320" containerID="3d7e1c5cdd3f94f8ea38b022ef3ba38fd0ec959ed59969e5f063e73706a8faa3" exitCode=0 Sep 30 17:15:30 crc kubenswrapper[4821]: I0930 17:15:30.164600 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xnl56" event={"ID":"d38ffd03-48b8-4684-aff0-089081da1320","Type":"ContainerDied","Data":"3d7e1c5cdd3f94f8ea38b022ef3ba38fd0ec959ed59969e5f063e73706a8faa3"} Sep 30 17:15:30 crc kubenswrapper[4821]: I0930 17:15:30.214666 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-df78k" Sep 30 17:15:31 crc kubenswrapper[4821]: I0930 17:15:31.181778 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xnl56" event={"ID":"d38ffd03-48b8-4684-aff0-089081da1320","Type":"ContainerStarted","Data":"96bf24d73ee023b52cfd6febae31092081e4fe1fef02f2b3940e21b283012b10"} Sep 30 17:15:31 crc kubenswrapper[4821]: I0930 17:15:31.182069 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xnl56" event={"ID":"d38ffd03-48b8-4684-aff0-089081da1320","Type":"ContainerStarted","Data":"b11d78afe4cfea69cc87931658cb9dff749b5a62e40cff6c93dee2ad92ed2a2f"} Sep 30 17:15:31 crc kubenswrapper[4821]: I0930 17:15:31.182107 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xnl56" event={"ID":"d38ffd03-48b8-4684-aff0-089081da1320","Type":"ContainerStarted","Data":"7c415e2a14923496ce18ab4fb4d7f1438b84e2813bca95401d0df9b28606d5c1"} Sep 30 17:15:31 crc kubenswrapper[4821]: I0930 17:15:31.182117 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xnl56" event={"ID":"d38ffd03-48b8-4684-aff0-089081da1320","Type":"ContainerStarted","Data":"70bc0a89ad2e8280b789bc5126bc55bf0298af3c0ad9f619887e0cdb3cfc0d81"} Sep 30 17:15:31 crc kubenswrapper[4821]: I0930 17:15:31.182125 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xnl56" event={"ID":"d38ffd03-48b8-4684-aff0-089081da1320","Type":"ContainerStarted","Data":"db0106e7f267bafff61429c57176bde0490249f78d77b02d73e15ab600d35b4c"} Sep 30 17:15:32 crc kubenswrapper[4821]: I0930 17:15:32.191005 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xnl56" event={"ID":"d38ffd03-48b8-4684-aff0-089081da1320","Type":"ContainerStarted","Data":"fd10017129f07d59673a14cfb0e308231b8999abdeeda4b4a0f982800e3bb390"} Sep 30 17:15:32 crc 
kubenswrapper[4821]: I0930 17:15:32.191614 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-xnl56"
Sep 30 17:15:32 crc kubenswrapper[4821]: I0930 17:15:32.213484 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-xnl56" podStartSLOduration=6.395223574 podStartE2EDuration="14.213463998s" podCreationTimestamp="2025-09-30 17:15:18 +0000 UTC" firstStartedPulling="2025-09-30 17:15:19.321744064 +0000 UTC m=+715.226790008" lastFinishedPulling="2025-09-30 17:15:27.139984488 +0000 UTC m=+723.045030432" observedRunningTime="2025-09-30 17:15:32.209830468 +0000 UTC m=+728.114876412" watchObservedRunningTime="2025-09-30 17:15:32.213463998 +0000 UTC m=+728.118509942"
Sep 30 17:15:33 crc kubenswrapper[4821]: I0930 17:15:33.014971 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-4f9j5"]
Sep 30 17:15:33 crc kubenswrapper[4821]: I0930 17:15:33.015969 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-4f9j5"
Sep 30 17:15:33 crc kubenswrapper[4821]: I0930 17:15:33.018280 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Sep 30 17:15:33 crc kubenswrapper[4821]: I0930 17:15:33.018778 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Sep 30 17:15:33 crc kubenswrapper[4821]: I0930 17:15:33.020352 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-qjthp"
Sep 30 17:15:33 crc kubenswrapper[4821]: I0930 17:15:33.035473 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-4f9j5"]
Sep 30 17:15:33 crc kubenswrapper[4821]: I0930 17:15:33.064679 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b78xj\" (UniqueName: \"kubernetes.io/projected/31423359-6a69-445c-8cef-ab04d2c509a6-kube-api-access-b78xj\") pod \"openstack-operator-index-4f9j5\" (UID: \"31423359-6a69-445c-8cef-ab04d2c509a6\") " pod="openstack-operators/openstack-operator-index-4f9j5"
Sep 30 17:15:33 crc kubenswrapper[4821]: I0930 17:15:33.165711 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b78xj\" (UniqueName: \"kubernetes.io/projected/31423359-6a69-445c-8cef-ab04d2c509a6-kube-api-access-b78xj\") pod \"openstack-operator-index-4f9j5\" (UID: \"31423359-6a69-445c-8cef-ab04d2c509a6\") " pod="openstack-operators/openstack-operator-index-4f9j5"
Sep 30 17:15:33 crc kubenswrapper[4821]: I0930 17:15:33.186285 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b78xj\" (UniqueName: \"kubernetes.io/projected/31423359-6a69-445c-8cef-ab04d2c509a6-kube-api-access-b78xj\") pod \"openstack-operator-index-4f9j5\" (UID: \"31423359-6a69-445c-8cef-ab04d2c509a6\") " pod="openstack-operators/openstack-operator-index-4f9j5"
Sep 30 17:15:33 crc kubenswrapper[4821]: I0930 17:15:33.343331 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-4f9j5"
Sep 30 17:15:33 crc kubenswrapper[4821]: I0930 17:15:33.693633 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-4f9j5"]
Sep 30 17:15:34 crc kubenswrapper[4821]: I0930 17:15:34.202581 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-4f9j5" event={"ID":"31423359-6a69-445c-8cef-ab04d2c509a6","Type":"ContainerStarted","Data":"ea0d25d15843d0737c0e3ad3517dce3d0ebf74ea00086aec93c89fd585fa91e0"}
Sep 30 17:15:34 crc kubenswrapper[4821]: I0930 17:15:34.221102 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-xnl56"
Sep 30 17:15:34 crc kubenswrapper[4821]: I0930 17:15:34.258267 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-xnl56"
Sep 30 17:15:36 crc kubenswrapper[4821]: I0930 17:15:36.214621 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-4f9j5" event={"ID":"31423359-6a69-445c-8cef-ab04d2c509a6","Type":"ContainerStarted","Data":"b80c74834ac0b0bcfab75f1fa4f2355e45e775117bda6f4135e2dc6dad2a9791"}
Sep 30 17:15:36 crc kubenswrapper[4821]: I0930 17:15:36.234262 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-4f9j5" podStartSLOduration=0.973730569 podStartE2EDuration="3.234242625s" podCreationTimestamp="2025-09-30 17:15:33 +0000 UTC" firstStartedPulling="2025-09-30 17:15:33.702644308 +0000 UTC m=+729.607690242" lastFinishedPulling="2025-09-30 17:15:35.963156354 +0000 UTC m=+731.868202298" observedRunningTime="2025-09-30 17:15:36.230503121 +0000 UTC m=+732.135549065" watchObservedRunningTime="2025-09-30 17:15:36.234242625 +0000 UTC m=+732.139288569"
Sep 30 17:15:36 crc kubenswrapper[4821]: I0930 17:15:36.394130 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-4f9j5"]
Sep 30 17:15:37 crc kubenswrapper[4821]: I0930 17:15:37.002808 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-vwq7v"]
Sep 30 17:15:37 crc kubenswrapper[4821]: I0930 17:15:37.003737 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vwq7v"
Sep 30 17:15:37 crc kubenswrapper[4821]: I0930 17:15:37.014663 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vwq7v"]
Sep 30 17:15:37 crc kubenswrapper[4821]: I0930 17:15:37.115685 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7cgh\" (UniqueName: \"kubernetes.io/projected/febb682d-9e87-4109-957d-96338ba83785-kube-api-access-c7cgh\") pod \"openstack-operator-index-vwq7v\" (UID: \"febb682d-9e87-4109-957d-96338ba83785\") " pod="openstack-operators/openstack-operator-index-vwq7v"
Sep 30 17:15:37 crc kubenswrapper[4821]: I0930 17:15:37.217097 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7cgh\" (UniqueName: \"kubernetes.io/projected/febb682d-9e87-4109-957d-96338ba83785-kube-api-access-c7cgh\") pod \"openstack-operator-index-vwq7v\" (UID: \"febb682d-9e87-4109-957d-96338ba83785\") " pod="openstack-operators/openstack-operator-index-vwq7v"
Sep 30 17:15:37 crc kubenswrapper[4821]: I0930 17:15:37.243219 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7cgh\" (UniqueName: \"kubernetes.io/projected/febb682d-9e87-4109-957d-96338ba83785-kube-api-access-c7cgh\") pod \"openstack-operator-index-vwq7v\" (UID: \"febb682d-9e87-4109-957d-96338ba83785\") " pod="openstack-operators/openstack-operator-index-vwq7v"
Sep 30 17:15:37 crc kubenswrapper[4821]: I0930 17:15:37.320061 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vwq7v"
Sep 30 17:15:37 crc kubenswrapper[4821]: I0930 17:15:37.705364 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vwq7v"]
Sep 30 17:15:37 crc kubenswrapper[4821]: W0930 17:15:37.709896 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfebb682d_9e87_4109_957d_96338ba83785.slice/crio-6f770ff8665439aea15463081bbe2510a14bbd96a53f1f0b8444d1345bae021d WatchSource:0}: Error finding container 6f770ff8665439aea15463081bbe2510a14bbd96a53f1f0b8444d1345bae021d: Status 404 returned error can't find the container with id 6f770ff8665439aea15463081bbe2510a14bbd96a53f1f0b8444d1345bae021d
Sep 30 17:15:38 crc kubenswrapper[4821]: I0930 17:15:38.229715 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vwq7v" event={"ID":"febb682d-9e87-4109-957d-96338ba83785","Type":"ContainerStarted","Data":"600dd05a31548d6d79f9251eb1741fc3ae11e2ec970821e229d08d25f3f83e12"}
Sep 30 17:15:38 crc kubenswrapper[4821]: I0930 17:15:38.231247 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vwq7v" event={"ID":"febb682d-9e87-4109-957d-96338ba83785","Type":"ContainerStarted","Data":"6f770ff8665439aea15463081bbe2510a14bbd96a53f1f0b8444d1345bae021d"}
Sep 30 17:15:38 crc kubenswrapper[4821]: I0930 17:15:38.229810 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-4f9j5" podUID="31423359-6a69-445c-8cef-ab04d2c509a6" containerName="registry-server" containerID="cri-o://b80c74834ac0b0bcfab75f1fa4f2355e45e775117bda6f4135e2dc6dad2a9791" gracePeriod=2
Sep 30 17:15:38 crc kubenswrapper[4821]: I0930 17:15:38.253253 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-vwq7v" podStartSLOduration=2.140705031 podStartE2EDuration="2.253215817s" podCreationTimestamp="2025-09-30 17:15:36 +0000 UTC" firstStartedPulling="2025-09-30 17:15:37.71443631 +0000 UTC m=+733.619482254" lastFinishedPulling="2025-09-30 17:15:37.826947096 +0000 UTC m=+733.731993040" observedRunningTime="2025-09-30 17:15:38.249750271 +0000 UTC m=+734.154796235" watchObservedRunningTime="2025-09-30 17:15:38.253215817 +0000 UTC m=+734.158261761"
Sep 30 17:15:38 crc kubenswrapper[4821]: I0930 17:15:38.566897 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-4f9j5"
Sep 30 17:15:38 crc kubenswrapper[4821]: I0930 17:15:38.646275 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b78xj\" (UniqueName: \"kubernetes.io/projected/31423359-6a69-445c-8cef-ab04d2c509a6-kube-api-access-b78xj\") pod \"31423359-6a69-445c-8cef-ab04d2c509a6\" (UID: \"31423359-6a69-445c-8cef-ab04d2c509a6\") "
Sep 30 17:15:38 crc kubenswrapper[4821]: I0930 17:15:38.650987 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31423359-6a69-445c-8cef-ab04d2c509a6-kube-api-access-b78xj" (OuterVolumeSpecName: "kube-api-access-b78xj") pod "31423359-6a69-445c-8cef-ab04d2c509a6" (UID: "31423359-6a69-445c-8cef-ab04d2c509a6"). InnerVolumeSpecName "kube-api-access-b78xj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:15:38 crc kubenswrapper[4821]: I0930 17:15:38.742324 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-5d688f5ffc-k9l8b"
Sep 30 17:15:38 crc kubenswrapper[4821]: I0930 17:15:38.748332 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b78xj\" (UniqueName: \"kubernetes.io/projected/31423359-6a69-445c-8cef-ab04d2c509a6-kube-api-access-b78xj\") on node \"crc\" DevicePath \"\""
Sep 30 17:15:39 crc kubenswrapper[4821]: I0930 17:15:39.216440 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-9r6qh"
Sep 30 17:15:39 crc kubenswrapper[4821]: I0930 17:15:39.236925 4821 generic.go:334] "Generic (PLEG): container finished" podID="31423359-6a69-445c-8cef-ab04d2c509a6" containerID="b80c74834ac0b0bcfab75f1fa4f2355e45e775117bda6f4135e2dc6dad2a9791" exitCode=0
Sep 30 17:15:39 crc kubenswrapper[4821]: I0930 17:15:39.237055 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-4f9j5"
Sep 30 17:15:39 crc kubenswrapper[4821]: I0930 17:15:39.237606 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-4f9j5" event={"ID":"31423359-6a69-445c-8cef-ab04d2c509a6","Type":"ContainerDied","Data":"b80c74834ac0b0bcfab75f1fa4f2355e45e775117bda6f4135e2dc6dad2a9791"}
Sep 30 17:15:39 crc kubenswrapper[4821]: I0930 17:15:39.237636 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-4f9j5" event={"ID":"31423359-6a69-445c-8cef-ab04d2c509a6","Type":"ContainerDied","Data":"ea0d25d15843d0737c0e3ad3517dce3d0ebf74ea00086aec93c89fd585fa91e0"}
Sep 30 17:15:39 crc kubenswrapper[4821]: I0930 17:15:39.237655 4821 scope.go:117] "RemoveContainer" containerID="b80c74834ac0b0bcfab75f1fa4f2355e45e775117bda6f4135e2dc6dad2a9791"
Sep 30 17:15:39 crc kubenswrapper[4821]: I0930 17:15:39.257179 4821 scope.go:117] "RemoveContainer" containerID="b80c74834ac0b0bcfab75f1fa4f2355e45e775117bda6f4135e2dc6dad2a9791"
Sep 30 17:15:39 crc kubenswrapper[4821]: E0930 17:15:39.257787 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b80c74834ac0b0bcfab75f1fa4f2355e45e775117bda6f4135e2dc6dad2a9791\": container with ID starting with b80c74834ac0b0bcfab75f1fa4f2355e45e775117bda6f4135e2dc6dad2a9791 not found: ID does not exist" containerID="b80c74834ac0b0bcfab75f1fa4f2355e45e775117bda6f4135e2dc6dad2a9791"
Sep 30 17:15:39 crc kubenswrapper[4821]: I0930 17:15:39.257935 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b80c74834ac0b0bcfab75f1fa4f2355e45e775117bda6f4135e2dc6dad2a9791"} err="failed to get container status \"b80c74834ac0b0bcfab75f1fa4f2355e45e775117bda6f4135e2dc6dad2a9791\": rpc error: code = NotFound desc = could not find container \"b80c74834ac0b0bcfab75f1fa4f2355e45e775117bda6f4135e2dc6dad2a9791\": container with ID starting with b80c74834ac0b0bcfab75f1fa4f2355e45e775117bda6f4135e2dc6dad2a9791 not found: ID does not exist"
Sep 30 17:15:39 crc kubenswrapper[4821]: I0930 17:15:39.267189 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-4f9j5"]
Sep 30 17:15:39 crc kubenswrapper[4821]: I0930 17:15:39.274917 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-4f9j5"]
Sep 30 17:15:40 crc kubenswrapper[4821]: I0930 17:15:40.713185 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31423359-6a69-445c-8cef-ab04d2c509a6" path="/var/lib/kubelet/pods/31423359-6a69-445c-8cef-ab04d2c509a6/volumes"
Sep 30 17:15:47 crc kubenswrapper[4821]: I0930 17:15:47.320564 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-vwq7v"
Sep 30 17:15:47 crc kubenswrapper[4821]: I0930 17:15:47.321382 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-vwq7v"
Sep 30 17:15:47 crc kubenswrapper[4821]: I0930 17:15:47.346447 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-vwq7v"
Sep 30 17:15:48 crc kubenswrapper[4821]: I0930 17:15:48.312096 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-vwq7v"
Sep 30 17:15:49 crc kubenswrapper[4821]: I0930 17:15:49.223549 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-xnl56"
Sep 30 17:15:49 crc kubenswrapper[4821]: I0930 17:15:49.349677 4821 patch_prober.go:28] interesting pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 30 17:15:49 crc kubenswrapper[4821]: I0930 17:15:49.349735 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.225742 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8pkhf"]
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.225939 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-8pkhf" podUID="0353afa5-86b4-40c4-9633-c75046a0e84d" containerName="controller-manager" containerID="cri-o://79769a4b6068e1d90bec8b760e02c92033923741910a5e5d0769de8f2d04cfa1" gracePeriod=30
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.319037 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bsh9r"]
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.319261 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bsh9r" podUID="0e281506-b9d7-4e26-964f-e472f7f2661f" containerName="route-controller-manager" containerID="cri-o://529af75ef8d605ae2ae99aaf0a05cd58f62939ab87ba7673516ed20fd04c1ef4" gracePeriod=30
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.587164 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8pkhf"
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.718190 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-bb498d678-kfdhk"]
Sep 30 17:15:50 crc kubenswrapper[4821]: E0930 17:15:50.718545 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31423359-6a69-445c-8cef-ab04d2c509a6" containerName="registry-server"
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.718588 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="31423359-6a69-445c-8cef-ab04d2c509a6" containerName="registry-server"
Sep 30 17:15:50 crc kubenswrapper[4821]: E0930 17:15:50.718609 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0353afa5-86b4-40c4-9633-c75046a0e84d" containerName="controller-manager"
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.718616 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="0353afa5-86b4-40c4-9633-c75046a0e84d" containerName="controller-manager"
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.718870 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="31423359-6a69-445c-8cef-ab04d2c509a6" containerName="registry-server"
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.718896 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="0353afa5-86b4-40c4-9633-c75046a0e84d" containerName="controller-manager"
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.719854 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0353afa5-86b4-40c4-9633-c75046a0e84d-client-ca\") pod \"0353afa5-86b4-40c4-9633-c75046a0e84d\" (UID: \"0353afa5-86b4-40c4-9633-c75046a0e84d\") "
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.719903 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0353afa5-86b4-40c4-9633-c75046a0e84d-proxy-ca-bundles\") pod \"0353afa5-86b4-40c4-9633-c75046a0e84d\" (UID: \"0353afa5-86b4-40c4-9633-c75046a0e84d\") "
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.719971 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0353afa5-86b4-40c4-9633-c75046a0e84d-serving-cert\") pod \"0353afa5-86b4-40c4-9633-c75046a0e84d\" (UID: \"0353afa5-86b4-40c4-9633-c75046a0e84d\") "
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.720106 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0353afa5-86b4-40c4-9633-c75046a0e84d-config\") pod \"0353afa5-86b4-40c4-9633-c75046a0e84d\" (UID: \"0353afa5-86b4-40c4-9633-c75046a0e84d\") "
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.720162 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dplxd\" (UniqueName: \"kubernetes.io/projected/0353afa5-86b4-40c4-9633-c75046a0e84d-kube-api-access-dplxd\") pod \"0353afa5-86b4-40c4-9633-c75046a0e84d\" (UID: \"0353afa5-86b4-40c4-9633-c75046a0e84d\") "
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.720516 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0353afa5-86b4-40c4-9633-c75046a0e84d-client-ca" (OuterVolumeSpecName: "client-ca") pod "0353afa5-86b4-40c4-9633-c75046a0e84d" (UID: "0353afa5-86b4-40c4-9633-c75046a0e84d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.720617 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bb498d678-kfdhk"
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.722017 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0353afa5-86b4-40c4-9633-c75046a0e84d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0353afa5-86b4-40c4-9633-c75046a0e84d" (UID: "0353afa5-86b4-40c4-9633-c75046a0e84d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.722577 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0353afa5-86b4-40c4-9633-c75046a0e84d-config" (OuterVolumeSpecName: "config") pod "0353afa5-86b4-40c4-9633-c75046a0e84d" (UID: "0353afa5-86b4-40c4-9633-c75046a0e84d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.735493 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bsh9r"
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.736586 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0353afa5-86b4-40c4-9633-c75046a0e84d-kube-api-access-dplxd" (OuterVolumeSpecName: "kube-api-access-dplxd") pod "0353afa5-86b4-40c4-9633-c75046a0e84d" (UID: "0353afa5-86b4-40c4-9633-c75046a0e84d"). InnerVolumeSpecName "kube-api-access-dplxd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.738879 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bb498d678-kfdhk"]
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.739218 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0353afa5-86b4-40c4-9633-c75046a0e84d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0353afa5-86b4-40c4-9633-c75046a0e84d" (UID: "0353afa5-86b4-40c4-9633-c75046a0e84d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.821103 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e281506-b9d7-4e26-964f-e472f7f2661f-config\") pod \"0e281506-b9d7-4e26-964f-e472f7f2661f\" (UID: \"0e281506-b9d7-4e26-964f-e472f7f2661f\") "
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.821175 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hczkz\" (UniqueName: \"kubernetes.io/projected/0e281506-b9d7-4e26-964f-e472f7f2661f-kube-api-access-hczkz\") pod \"0e281506-b9d7-4e26-964f-e472f7f2661f\" (UID: \"0e281506-b9d7-4e26-964f-e472f7f2661f\") "
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.821193 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e281506-b9d7-4e26-964f-e472f7f2661f-client-ca\") pod \"0e281506-b9d7-4e26-964f-e472f7f2661f\" (UID: \"0e281506-b9d7-4e26-964f-e472f7f2661f\") "
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.821215 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e281506-b9d7-4e26-964f-e472f7f2661f-serving-cert\") pod \"0e281506-b9d7-4e26-964f-e472f7f2661f\" (UID: \"0e281506-b9d7-4e26-964f-e472f7f2661f\") "
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.821416 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kcmd\" (UniqueName: \"kubernetes.io/projected/828df442-4e99-4404-aa45-011b50214209-kube-api-access-2kcmd\") pod \"controller-manager-bb498d678-kfdhk\" (UID: \"828df442-4e99-4404-aa45-011b50214209\") " pod="openshift-controller-manager/controller-manager-bb498d678-kfdhk"
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.821467 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/828df442-4e99-4404-aa45-011b50214209-config\") pod \"controller-manager-bb498d678-kfdhk\" (UID: \"828df442-4e99-4404-aa45-011b50214209\") " pod="openshift-controller-manager/controller-manager-bb498d678-kfdhk"
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.821492 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/828df442-4e99-4404-aa45-011b50214209-proxy-ca-bundles\") pod \"controller-manager-bb498d678-kfdhk\" (UID: \"828df442-4e99-4404-aa45-011b50214209\") " pod="openshift-controller-manager/controller-manager-bb498d678-kfdhk"
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.821520 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/828df442-4e99-4404-aa45-011b50214209-client-ca\") pod \"controller-manager-bb498d678-kfdhk\" (UID: \"828df442-4e99-4404-aa45-011b50214209\") " pod="openshift-controller-manager/controller-manager-bb498d678-kfdhk"
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.821545 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/828df442-4e99-4404-aa45-011b50214209-serving-cert\") pod \"controller-manager-bb498d678-kfdhk\" (UID: \"828df442-4e99-4404-aa45-011b50214209\") " pod="openshift-controller-manager/controller-manager-bb498d678-kfdhk"
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.821597 4821 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0353afa5-86b4-40c4-9633-c75046a0e84d-client-ca\") on node \"crc\" DevicePath \"\""
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.821611 4821 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0353afa5-86b4-40c4-9633-c75046a0e84d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.821622 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0353afa5-86b4-40c4-9633-c75046a0e84d-serving-cert\") on node \"crc\" DevicePath \"\""
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.821632 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0353afa5-86b4-40c4-9633-c75046a0e84d-config\") on node \"crc\" DevicePath \"\""
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.821642 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dplxd\" (UniqueName: \"kubernetes.io/projected/0353afa5-86b4-40c4-9633-c75046a0e84d-kube-api-access-dplxd\") on node \"crc\" DevicePath \"\""
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.822653 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e281506-b9d7-4e26-964f-e472f7f2661f-config" (OuterVolumeSpecName: "config") pod "0e281506-b9d7-4e26-964f-e472f7f2661f" (UID: "0e281506-b9d7-4e26-964f-e472f7f2661f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.823215 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e281506-b9d7-4e26-964f-e472f7f2661f-client-ca" (OuterVolumeSpecName: "client-ca") pod "0e281506-b9d7-4e26-964f-e472f7f2661f" (UID: "0e281506-b9d7-4e26-964f-e472f7f2661f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.824826 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e281506-b9d7-4e26-964f-e472f7f2661f-kube-api-access-hczkz" (OuterVolumeSpecName: "kube-api-access-hczkz") pod "0e281506-b9d7-4e26-964f-e472f7f2661f" (UID: "0e281506-b9d7-4e26-964f-e472f7f2661f"). InnerVolumeSpecName "kube-api-access-hczkz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.826144 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e281506-b9d7-4e26-964f-e472f7f2661f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0e281506-b9d7-4e26-964f-e472f7f2661f" (UID: "0e281506-b9d7-4e26-964f-e472f7f2661f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.923039 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kcmd\" (UniqueName: \"kubernetes.io/projected/828df442-4e99-4404-aa45-011b50214209-kube-api-access-2kcmd\") pod \"controller-manager-bb498d678-kfdhk\" (UID: \"828df442-4e99-4404-aa45-011b50214209\") " pod="openshift-controller-manager/controller-manager-bb498d678-kfdhk" Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.923120 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/828df442-4e99-4404-aa45-011b50214209-config\") pod \"controller-manager-bb498d678-kfdhk\" (UID: \"828df442-4e99-4404-aa45-011b50214209\") " pod="openshift-controller-manager/controller-manager-bb498d678-kfdhk" Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.923154 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/828df442-4e99-4404-aa45-011b50214209-proxy-ca-bundles\") pod \"controller-manager-bb498d678-kfdhk\" (UID: \"828df442-4e99-4404-aa45-011b50214209\") " pod="openshift-controller-manager/controller-manager-bb498d678-kfdhk" Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.923180 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/828df442-4e99-4404-aa45-011b50214209-client-ca\") pod \"controller-manager-bb498d678-kfdhk\" (UID: \"828df442-4e99-4404-aa45-011b50214209\") " pod="openshift-controller-manager/controller-manager-bb498d678-kfdhk" Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.923207 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/828df442-4e99-4404-aa45-011b50214209-serving-cert\") pod \"controller-manager-bb498d678-kfdhk\" (UID: \"828df442-4e99-4404-aa45-011b50214209\") " pod="openshift-controller-manager/controller-manager-bb498d678-kfdhk" Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.923247 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e281506-b9d7-4e26-964f-e472f7f2661f-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.923258 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hczkz\" (UniqueName: \"kubernetes.io/projected/0e281506-b9d7-4e26-964f-e472f7f2661f-kube-api-access-hczkz\") on node \"crc\" DevicePath \"\"" Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.923266 4821 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e281506-b9d7-4e26-964f-e472f7f2661f-client-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.923274 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e281506-b9d7-4e26-964f-e472f7f2661f-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.930054 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/828df442-4e99-4404-aa45-011b50214209-serving-cert\") pod \"controller-manager-bb498d678-kfdhk\" (UID: \"828df442-4e99-4404-aa45-011b50214209\") " 
pod="openshift-controller-manager/controller-manager-bb498d678-kfdhk" Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.943432 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/828df442-4e99-4404-aa45-011b50214209-client-ca\") pod \"controller-manager-bb498d678-kfdhk\" (UID: \"828df442-4e99-4404-aa45-011b50214209\") " pod="openshift-controller-manager/controller-manager-bb498d678-kfdhk" Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.944098 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/828df442-4e99-4404-aa45-011b50214209-proxy-ca-bundles\") pod \"controller-manager-bb498d678-kfdhk\" (UID: \"828df442-4e99-4404-aa45-011b50214209\") " pod="openshift-controller-manager/controller-manager-bb498d678-kfdhk" Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.945401 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/828df442-4e99-4404-aa45-011b50214209-config\") pod \"controller-manager-bb498d678-kfdhk\" (UID: \"828df442-4e99-4404-aa45-011b50214209\") " pod="openshift-controller-manager/controller-manager-bb498d678-kfdhk" Sep 30 17:15:50 crc kubenswrapper[4821]: I0930 17:15:50.967688 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kcmd\" (UniqueName: \"kubernetes.io/projected/828df442-4e99-4404-aa45-011b50214209-kube-api-access-2kcmd\") pod \"controller-manager-bb498d678-kfdhk\" (UID: \"828df442-4e99-4404-aa45-011b50214209\") " pod="openshift-controller-manager/controller-manager-bb498d678-kfdhk" Sep 30 17:15:51 crc kubenswrapper[4821]: I0930 17:15:51.044620 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bb498d678-kfdhk" Sep 30 17:15:51 crc kubenswrapper[4821]: I0930 17:15:51.263258 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bb498d678-kfdhk"] Sep 30 17:15:51 crc kubenswrapper[4821]: I0930 17:15:51.309183 4821 generic.go:334] "Generic (PLEG): container finished" podID="0353afa5-86b4-40c4-9633-c75046a0e84d" containerID="79769a4b6068e1d90bec8b760e02c92033923741910a5e5d0769de8f2d04cfa1" exitCode=0 Sep 30 17:15:51 crc kubenswrapper[4821]: I0930 17:15:51.309458 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8pkhf" event={"ID":"0353afa5-86b4-40c4-9633-c75046a0e84d","Type":"ContainerDied","Data":"79769a4b6068e1d90bec8b760e02c92033923741910a5e5d0769de8f2d04cfa1"} Sep 30 17:15:51 crc kubenswrapper[4821]: I0930 17:15:51.309563 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8pkhf" event={"ID":"0353afa5-86b4-40c4-9633-c75046a0e84d","Type":"ContainerDied","Data":"41df9de0497cdb27c145bf4a1cf2d29e86e7e648e1c3a5b7f40afd2ee1a103c3"} Sep 30 17:15:51 crc kubenswrapper[4821]: I0930 17:15:51.309645 4821 scope.go:117] "RemoveContainer" containerID="79769a4b6068e1d90bec8b760e02c92033923741910a5e5d0769de8f2d04cfa1" Sep 30 17:15:51 crc kubenswrapper[4821]: I0930 17:15:51.309818 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8pkhf" Sep 30 17:15:51 crc kubenswrapper[4821]: I0930 17:15:51.316131 4821 generic.go:334] "Generic (PLEG): container finished" podID="0e281506-b9d7-4e26-964f-e472f7f2661f" containerID="529af75ef8d605ae2ae99aaf0a05cd58f62939ab87ba7673516ed20fd04c1ef4" exitCode=0 Sep 30 17:15:51 crc kubenswrapper[4821]: I0930 17:15:51.316263 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bsh9r" Sep 30 17:15:51 crc kubenswrapper[4821]: I0930 17:15:51.317166 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bsh9r" event={"ID":"0e281506-b9d7-4e26-964f-e472f7f2661f","Type":"ContainerDied","Data":"529af75ef8d605ae2ae99aaf0a05cd58f62939ab87ba7673516ed20fd04c1ef4"} Sep 30 17:15:51 crc kubenswrapper[4821]: I0930 17:15:51.317203 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bsh9r" event={"ID":"0e281506-b9d7-4e26-964f-e472f7f2661f","Type":"ContainerDied","Data":"1142dffcc9c5f8f001dfe8f38d65d45ce5b8083d88785af3cdb887962639d8c5"} Sep 30 17:15:51 crc kubenswrapper[4821]: I0930 17:15:51.318386 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bb498d678-kfdhk" event={"ID":"828df442-4e99-4404-aa45-011b50214209","Type":"ContainerStarted","Data":"d2fb2b9906836b997e8e41da523268ba25e0cccfc74d3c3c256f4d559d0d9a5e"} Sep 30 17:15:51 crc kubenswrapper[4821]: I0930 17:15:51.345838 4821 scope.go:117] "RemoveContainer" containerID="79769a4b6068e1d90bec8b760e02c92033923741910a5e5d0769de8f2d04cfa1" Sep 30 17:15:51 crc kubenswrapper[4821]: E0930 17:15:51.357158 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79769a4b6068e1d90bec8b760e02c92033923741910a5e5d0769de8f2d04cfa1\": container with ID starting with 79769a4b6068e1d90bec8b760e02c92033923741910a5e5d0769de8f2d04cfa1 not found: ID does not exist" containerID="79769a4b6068e1d90bec8b760e02c92033923741910a5e5d0769de8f2d04cfa1" Sep 30 17:15:51 crc kubenswrapper[4821]: I0930 17:15:51.357408 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79769a4b6068e1d90bec8b760e02c92033923741910a5e5d0769de8f2d04cfa1"} err="failed to get container status \"79769a4b6068e1d90bec8b760e02c92033923741910a5e5d0769de8f2d04cfa1\": rpc error: code = NotFound desc = could not find container \"79769a4b6068e1d90bec8b760e02c92033923741910a5e5d0769de8f2d04cfa1\": container with ID starting with 79769a4b6068e1d90bec8b760e02c92033923741910a5e5d0769de8f2d04cfa1 not found: ID does not exist" Sep 30 17:15:51 crc kubenswrapper[4821]: I0930 17:15:51.357519 4821 scope.go:117] "RemoveContainer" containerID="529af75ef8d605ae2ae99aaf0a05cd58f62939ab87ba7673516ed20fd04c1ef4" Sep 30 17:15:51 crc kubenswrapper[4821]: I0930 17:15:51.401231 4821 scope.go:117] "RemoveContainer" containerID="529af75ef8d605ae2ae99aaf0a05cd58f62939ab87ba7673516ed20fd04c1ef4" Sep 30 17:15:51 crc kubenswrapper[4821]: E0930 17:15:51.405631 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"529af75ef8d605ae2ae99aaf0a05cd58f62939ab87ba7673516ed20fd04c1ef4\": container with ID starting with 
529af75ef8d605ae2ae99aaf0a05cd58f62939ab87ba7673516ed20fd04c1ef4 not found: ID does not exist" containerID="529af75ef8d605ae2ae99aaf0a05cd58f62939ab87ba7673516ed20fd04c1ef4" Sep 30 17:15:51 crc kubenswrapper[4821]: I0930 17:15:51.405671 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"529af75ef8d605ae2ae99aaf0a05cd58f62939ab87ba7673516ed20fd04c1ef4"} err="failed to get container status \"529af75ef8d605ae2ae99aaf0a05cd58f62939ab87ba7673516ed20fd04c1ef4\": rpc error: code = NotFound desc = could not find container \"529af75ef8d605ae2ae99aaf0a05cd58f62939ab87ba7673516ed20fd04c1ef4\": container with ID starting with 529af75ef8d605ae2ae99aaf0a05cd58f62939ab87ba7673516ed20fd04c1ef4 not found: ID does not exist" Sep 30 17:15:51 crc kubenswrapper[4821]: I0930 17:15:51.420213 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8pkhf"] Sep 30 17:15:51 crc kubenswrapper[4821]: I0930 17:15:51.425952 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8pkhf"] Sep 30 17:15:51 crc kubenswrapper[4821]: I0930 17:15:51.432828 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bsh9r"] Sep 30 17:15:51 crc kubenswrapper[4821]: I0930 17:15:51.437619 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bsh9r"] Sep 30 17:15:52 crc kubenswrapper[4821]: I0930 17:15:52.327603 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bb498d678-kfdhk" event={"ID":"828df442-4e99-4404-aa45-011b50214209","Type":"ContainerStarted","Data":"8a13be8e309559b932d4755b81a9bd16a58e11e4636c1af6b02a16b95aa7579b"} Sep 30 17:15:52 crc kubenswrapper[4821]: I0930 17:15:52.327828 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-bb498d678-kfdhk" Sep 30 17:15:52 crc kubenswrapper[4821]: I0930 17:15:52.334110 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-bb498d678-kfdhk" Sep 30 17:15:52 crc kubenswrapper[4821]: I0930 17:15:52.357434 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-bb498d678-kfdhk" podStartSLOduration=2.357417889 podStartE2EDuration="2.357417889s" podCreationTimestamp="2025-09-30 17:15:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:15:52.355122173 +0000 UTC m=+748.260168127" watchObservedRunningTime="2025-09-30 17:15:52.357417889 +0000 UTC m=+748.262463833" Sep 30 17:15:52 crc kubenswrapper[4821]: I0930 17:15:52.714261 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0353afa5-86b4-40c4-9633-c75046a0e84d" path="/var/lib/kubelet/pods/0353afa5-86b4-40c4-9633-c75046a0e84d/volumes" Sep 30 17:15:52 crc kubenswrapper[4821]: I0930 17:15:52.715262 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e281506-b9d7-4e26-964f-e472f7f2661f" path="/var/lib/kubelet/pods/0e281506-b9d7-4e26-964f-e472f7f2661f/volumes" Sep 30 17:15:52 crc kubenswrapper[4821]: I0930 17:15:52.757837 4821 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-5bcf787dcc-bchw7"] Sep 30 17:15:52 crc kubenswrapper[4821]: E0930 17:15:52.758613 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e281506-b9d7-4e26-964f-e472f7f2661f" containerName="route-controller-manager" Sep 30 17:15:52 crc kubenswrapper[4821]: I0930 17:15:52.758773 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e281506-b9d7-4e26-964f-e472f7f2661f" containerName="route-controller-manager" Sep 30 17:15:52 crc kubenswrapper[4821]: I0930 17:15:52.759058 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e281506-b9d7-4e26-964f-e472f7f2661f" containerName="route-controller-manager" Sep 30 17:15:52 crc kubenswrapper[4821]: I0930 17:15:52.759893 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5bcf787dcc-bchw7" Sep 30 17:15:52 crc kubenswrapper[4821]: I0930 17:15:52.763434 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Sep 30 17:15:52 crc kubenswrapper[4821]: I0930 17:15:52.763977 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Sep 30 17:15:52 crc kubenswrapper[4821]: I0930 17:15:52.764304 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Sep 30 17:15:52 crc kubenswrapper[4821]: I0930 17:15:52.764654 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Sep 30 17:15:52 crc kubenswrapper[4821]: I0930 17:15:52.764793 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Sep 30 17:15:52 crc kubenswrapper[4821]: I0930 17:15:52.764921 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Sep 30 17:15:52 crc kubenswrapper[4821]: I0930 17:15:52.766545 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bcf787dcc-bchw7"] Sep 30 17:15:52 crc kubenswrapper[4821]: I0930 17:15:52.852783 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/909dd4cd-1bf8-4982-ba13-387f92b71dc7-serving-cert\") pod \"route-controller-manager-5bcf787dcc-bchw7\" (UID: \"909dd4cd-1bf8-4982-ba13-387f92b71dc7\") " pod="openshift-route-controller-manager/route-controller-manager-5bcf787dcc-bchw7" Sep 30 17:15:52 crc kubenswrapper[4821]: I0930 17:15:52.852847 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87xp2\" (UniqueName: \"kubernetes.io/projected/909dd4cd-1bf8-4982-ba13-387f92b71dc7-kube-api-access-87xp2\") pod \"route-controller-manager-5bcf787dcc-bchw7\" (UID: \"909dd4cd-1bf8-4982-ba13-387f92b71dc7\") " pod="openshift-route-controller-manager/route-controller-manager-5bcf787dcc-bchw7" Sep 30 17:15:52 crc kubenswrapper[4821]: I0930 17:15:52.852875 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/909dd4cd-1bf8-4982-ba13-387f92b71dc7-client-ca\") pod \"route-controller-manager-5bcf787dcc-bchw7\" (UID: 
\"909dd4cd-1bf8-4982-ba13-387f92b71dc7\") " pod="openshift-route-controller-manager/route-controller-manager-5bcf787dcc-bchw7" Sep 30 17:15:52 crc kubenswrapper[4821]: I0930 17:15:52.853011 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/909dd4cd-1bf8-4982-ba13-387f92b71dc7-config\") pod \"route-controller-manager-5bcf787dcc-bchw7\" (UID: \"909dd4cd-1bf8-4982-ba13-387f92b71dc7\") " pod="openshift-route-controller-manager/route-controller-manager-5bcf787dcc-bchw7" Sep 30 17:15:52 crc kubenswrapper[4821]: I0930 17:15:52.953678 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/909dd4cd-1bf8-4982-ba13-387f92b71dc7-config\") pod \"route-controller-manager-5bcf787dcc-bchw7\" (UID: \"909dd4cd-1bf8-4982-ba13-387f92b71dc7\") " pod="openshift-route-controller-manager/route-controller-manager-5bcf787dcc-bchw7" Sep 30 17:15:52 crc kubenswrapper[4821]: I0930 17:15:52.953748 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/909dd4cd-1bf8-4982-ba13-387f92b71dc7-serving-cert\") pod \"route-controller-manager-5bcf787dcc-bchw7\" (UID: \"909dd4cd-1bf8-4982-ba13-387f92b71dc7\") " pod="openshift-route-controller-manager/route-controller-manager-5bcf787dcc-bchw7" Sep 30 17:15:52 crc kubenswrapper[4821]: I0930 17:15:52.953770 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87xp2\" (UniqueName: \"kubernetes.io/projected/909dd4cd-1bf8-4982-ba13-387f92b71dc7-kube-api-access-87xp2\") pod \"route-controller-manager-5bcf787dcc-bchw7\" (UID: \"909dd4cd-1bf8-4982-ba13-387f92b71dc7\") " pod="openshift-route-controller-manager/route-controller-manager-5bcf787dcc-bchw7" Sep 30 17:15:52 crc kubenswrapper[4821]: I0930 17:15:52.953791 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/909dd4cd-1bf8-4982-ba13-387f92b71dc7-client-ca\") pod \"route-controller-manager-5bcf787dcc-bchw7\" (UID: \"909dd4cd-1bf8-4982-ba13-387f92b71dc7\") " pod="openshift-route-controller-manager/route-controller-manager-5bcf787dcc-bchw7" Sep 30 17:15:52 crc kubenswrapper[4821]: I0930 17:15:52.954826 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/909dd4cd-1bf8-4982-ba13-387f92b71dc7-client-ca\") pod \"route-controller-manager-5bcf787dcc-bchw7\" (UID: \"909dd4cd-1bf8-4982-ba13-387f92b71dc7\") " pod="openshift-route-controller-manager/route-controller-manager-5bcf787dcc-bchw7" Sep 30 17:15:52 crc kubenswrapper[4821]: I0930 17:15:52.956035 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/909dd4cd-1bf8-4982-ba13-387f92b71dc7-config\") pod \"route-controller-manager-5bcf787dcc-bchw7\" (UID: \"909dd4cd-1bf8-4982-ba13-387f92b71dc7\") " pod="openshift-route-controller-manager/route-controller-manager-5bcf787dcc-bchw7" Sep 30 17:15:52 crc kubenswrapper[4821]: I0930 17:15:52.964451 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/909dd4cd-1bf8-4982-ba13-387f92b71dc7-serving-cert\") pod \"route-controller-manager-5bcf787dcc-bchw7\" (UID: \"909dd4cd-1bf8-4982-ba13-387f92b71dc7\") " 
pod="openshift-route-controller-manager/route-controller-manager-5bcf787dcc-bchw7" Sep 30 17:15:52 crc kubenswrapper[4821]: I0930 17:15:52.981236 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87xp2\" (UniqueName: \"kubernetes.io/projected/909dd4cd-1bf8-4982-ba13-387f92b71dc7-kube-api-access-87xp2\") pod \"route-controller-manager-5bcf787dcc-bchw7\" (UID: \"909dd4cd-1bf8-4982-ba13-387f92b71dc7\") " pod="openshift-route-controller-manager/route-controller-manager-5bcf787dcc-bchw7" Sep 30 17:15:53 crc kubenswrapper[4821]: I0930 17:15:53.078284 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5bcf787dcc-bchw7" Sep 30 17:15:53 crc kubenswrapper[4821]: I0930 17:15:53.498895 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bcf787dcc-bchw7"] Sep 30 17:15:54 crc kubenswrapper[4821]: I0930 17:15:54.346854 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5bcf787dcc-bchw7" event={"ID":"909dd4cd-1bf8-4982-ba13-387f92b71dc7","Type":"ContainerStarted","Data":"2c513beea58bcde680e7deb1d8f4a1af098e731b97639a9bf2381a8cd4989bbf"} Sep 30 17:15:54 crc kubenswrapper[4821]: I0930 17:15:54.347180 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5bcf787dcc-bchw7" event={"ID":"909dd4cd-1bf8-4982-ba13-387f92b71dc7","Type":"ContainerStarted","Data":"f3205cdaaa8b407ece5380800d1b00015e4ca9398019b3a1e56a6a277aa25ebc"} Sep 30 17:15:54 crc kubenswrapper[4821]: I0930 17:15:54.347363 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5bcf787dcc-bchw7" Sep 30 17:15:54 crc kubenswrapper[4821]: I0930 17:15:54.351857 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5bcf787dcc-bchw7" Sep 30 17:15:54 crc kubenswrapper[4821]: I0930 17:15:54.365332 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5bcf787dcc-bchw7" podStartSLOduration=4.365315556 podStartE2EDuration="4.365315556s" podCreationTimestamp="2025-09-30 17:15:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:15:54.362155387 +0000 UTC m=+750.267201331" watchObservedRunningTime="2025-09-30 17:15:54.365315556 +0000 UTC m=+750.270361500" Sep 30 17:15:55 crc kubenswrapper[4821]: I0930 17:15:55.784145 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n"] Sep 30 17:15:55 crc kubenswrapper[4821]: I0930 17:15:55.785808 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n" Sep 30 17:15:55 crc kubenswrapper[4821]: I0930 17:15:55.788151 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-mbwbb" Sep 30 17:15:55 crc kubenswrapper[4821]: I0930 17:15:55.796877 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n"] Sep 30 17:15:55 crc kubenswrapper[4821]: I0930 17:15:55.893905 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7mfm\" (UniqueName: \"kubernetes.io/projected/e84be49a-d25d-43d4-a2ec-f2a1de45d92e-kube-api-access-t7mfm\") pod \"53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n\" (UID: \"e84be49a-d25d-43d4-a2ec-f2a1de45d92e\") " pod="openstack-operators/53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n" Sep 30 17:15:55 crc kubenswrapper[4821]: I0930 17:15:55.893960 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e84be49a-d25d-43d4-a2ec-f2a1de45d92e-util\") pod \"53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n\" (UID: \"e84be49a-d25d-43d4-a2ec-f2a1de45d92e\") " pod="openstack-operators/53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n" Sep 30 17:15:55 crc kubenswrapper[4821]: I0930 17:15:55.894013 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e84be49a-d25d-43d4-a2ec-f2a1de45d92e-bundle\") pod \"53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n\" (UID: \"e84be49a-d25d-43d4-a2ec-f2a1de45d92e\") " pod="openstack-operators/53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n" Sep 30 17:15:55 crc kubenswrapper[4821]: I0930 17:15:55.995397 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e84be49a-d25d-43d4-a2ec-f2a1de45d92e-bundle\") pod \"53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n\" (UID: \"e84be49a-d25d-43d4-a2ec-f2a1de45d92e\") " pod="openstack-operators/53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n" Sep 30 17:15:55 crc kubenswrapper[4821]: I0930 17:15:55.995507 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7mfm\" (UniqueName: \"kubernetes.io/projected/e84be49a-d25d-43d4-a2ec-f2a1de45d92e-kube-api-access-t7mfm\") pod \"53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n\" (UID: \"e84be49a-d25d-43d4-a2ec-f2a1de45d92e\") " pod="openstack-operators/53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n" Sep 30 17:15:55 crc kubenswrapper[4821]: I0930 17:15:55.995555 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e84be49a-d25d-43d4-a2ec-f2a1de45d92e-util\") pod \"53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n\" (UID: \"e84be49a-d25d-43d4-a2ec-f2a1de45d92e\") " pod="openstack-operators/53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n" Sep 30 17:15:55 crc kubenswrapper[4821]: I0930 17:15:55.996265 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/e84be49a-d25d-43d4-a2ec-f2a1de45d92e-bundle\") pod \"53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n\" (UID: \"e84be49a-d25d-43d4-a2ec-f2a1de45d92e\") " pod="openstack-operators/53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n" Sep 30 17:15:55 crc kubenswrapper[4821]: I0930 17:15:55.996311 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e84be49a-d25d-43d4-a2ec-f2a1de45d92e-util\") pod \"53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n\" (UID: \"e84be49a-d25d-43d4-a2ec-f2a1de45d92e\") " pod="openstack-operators/53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n" Sep 30 17:15:56 crc kubenswrapper[4821]: I0930 17:15:56.015037 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7mfm\" (UniqueName: \"kubernetes.io/projected/e84be49a-d25d-43d4-a2ec-f2a1de45d92e-kube-api-access-t7mfm\") pod \"53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n\" (UID: \"e84be49a-d25d-43d4-a2ec-f2a1de45d92e\") " pod="openstack-operators/53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n" Sep 30 17:15:56 crc kubenswrapper[4821]: I0930 17:15:56.104810 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n" Sep 30 17:15:56 crc kubenswrapper[4821]: I0930 17:15:56.506203 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n"] Sep 30 17:15:57 crc kubenswrapper[4821]: I0930 17:15:57.362832 4821 generic.go:334] "Generic (PLEG): container finished" podID="e84be49a-d25d-43d4-a2ec-f2a1de45d92e" containerID="8d72b7bf11d3c4524b455011317c688e3b743a58edbd8e2c6fafe22c706d9ffa" exitCode=0 Sep 30 17:15:57 crc kubenswrapper[4821]: I0930 17:15:57.362885 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n" event={"ID":"e84be49a-d25d-43d4-a2ec-f2a1de45d92e","Type":"ContainerDied","Data":"8d72b7bf11d3c4524b455011317c688e3b743a58edbd8e2c6fafe22c706d9ffa"} Sep 30 17:15:57 crc kubenswrapper[4821]: I0930 17:15:57.363090 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n" event={"ID":"e84be49a-d25d-43d4-a2ec-f2a1de45d92e","Type":"ContainerStarted","Data":"d1cfab35358e92f9b87c4bd43389b7967ca5ddf306ba9a235587ec30c54c40f1"} Sep 30 17:15:57 crc kubenswrapper[4821]: I0930 17:15:57.794309 4821 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Sep 30 17:15:58 crc kubenswrapper[4821]: I0930 17:15:58.379973 4821 generic.go:334] "Generic (PLEG): container finished" podID="e84be49a-d25d-43d4-a2ec-f2a1de45d92e" containerID="907bce90a753570f7c95303dd3a3bd31eaf73a121745f85aab65e473e18fead6" exitCode=0 Sep 30 17:15:58 crc kubenswrapper[4821]: I0930 17:15:58.380016 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n" event={"ID":"e84be49a-d25d-43d4-a2ec-f2a1de45d92e","Type":"ContainerDied","Data":"907bce90a753570f7c95303dd3a3bd31eaf73a121745f85aab65e473e18fead6"} Sep 30 17:15:59 crc kubenswrapper[4821]: I0930 17:15:59.388386 4821 generic.go:334] "Generic (PLEG): container finished" 
podID="e84be49a-d25d-43d4-a2ec-f2a1de45d92e" containerID="39a6acd450432a840f15b4b73bc7ea35132519597703d4f088444a6f29614877" exitCode=0 Sep 30 17:15:59 crc kubenswrapper[4821]: I0930 17:15:59.388465 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n" event={"ID":"e84be49a-d25d-43d4-a2ec-f2a1de45d92e","Type":"ContainerDied","Data":"39a6acd450432a840f15b4b73bc7ea35132519597703d4f088444a6f29614877"} Sep 30 17:16:00 crc kubenswrapper[4821]: I0930 17:16:00.724855 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n" Sep 30 17:16:00 crc kubenswrapper[4821]: I0930 17:16:00.860552 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e84be49a-d25d-43d4-a2ec-f2a1de45d92e-util\") pod \"e84be49a-d25d-43d4-a2ec-f2a1de45d92e\" (UID: \"e84be49a-d25d-43d4-a2ec-f2a1de45d92e\") " Sep 30 17:16:00 crc kubenswrapper[4821]: I0930 17:16:00.860594 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e84be49a-d25d-43d4-a2ec-f2a1de45d92e-bundle\") pod \"e84be49a-d25d-43d4-a2ec-f2a1de45d92e\" (UID: \"e84be49a-d25d-43d4-a2ec-f2a1de45d92e\") " Sep 30 17:16:00 crc kubenswrapper[4821]: I0930 17:16:00.860643 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7mfm\" (UniqueName: \"kubernetes.io/projected/e84be49a-d25d-43d4-a2ec-f2a1de45d92e-kube-api-access-t7mfm\") pod \"e84be49a-d25d-43d4-a2ec-f2a1de45d92e\" (UID: \"e84be49a-d25d-43d4-a2ec-f2a1de45d92e\") " Sep 30 17:16:00 crc kubenswrapper[4821]: I0930 17:16:00.861463 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e84be49a-d25d-43d4-a2ec-f2a1de45d92e-bundle" (OuterVolumeSpecName: "bundle") pod "e84be49a-d25d-43d4-a2ec-f2a1de45d92e" (UID: "e84be49a-d25d-43d4-a2ec-f2a1de45d92e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:16:00 crc kubenswrapper[4821]: I0930 17:16:00.864963 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e84be49a-d25d-43d4-a2ec-f2a1de45d92e-kube-api-access-t7mfm" (OuterVolumeSpecName: "kube-api-access-t7mfm") pod "e84be49a-d25d-43d4-a2ec-f2a1de45d92e" (UID: "e84be49a-d25d-43d4-a2ec-f2a1de45d92e"). InnerVolumeSpecName "kube-api-access-t7mfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:16:00 crc kubenswrapper[4821]: I0930 17:16:00.877331 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e84be49a-d25d-43d4-a2ec-f2a1de45d92e-util" (OuterVolumeSpecName: "util") pod "e84be49a-d25d-43d4-a2ec-f2a1de45d92e" (UID: "e84be49a-d25d-43d4-a2ec-f2a1de45d92e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:16:00 crc kubenswrapper[4821]: I0930 17:16:00.961588 4821 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e84be49a-d25d-43d4-a2ec-f2a1de45d92e-util\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:00 crc kubenswrapper[4821]: I0930 17:16:00.961622 4821 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e84be49a-d25d-43d4-a2ec-f2a1de45d92e-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:00 crc kubenswrapper[4821]: I0930 17:16:00.961634 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7mfm\" (UniqueName: \"kubernetes.io/projected/e84be49a-d25d-43d4-a2ec-f2a1de45d92e-kube-api-access-t7mfm\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:01 crc kubenswrapper[4821]: I0930 17:16:01.403505 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n" event={"ID":"e84be49a-d25d-43d4-a2ec-f2a1de45d92e","Type":"ContainerDied","Data":"d1cfab35358e92f9b87c4bd43389b7967ca5ddf306ba9a235587ec30c54c40f1"} Sep 30 17:16:01 crc kubenswrapper[4821]: I0930 17:16:01.403568 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1cfab35358e92f9b87c4bd43389b7967ca5ddf306ba9a235587ec30c54c40f1" Sep 30 17:16:01 crc kubenswrapper[4821]: I0930 17:16:01.403569 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n" Sep 30 17:16:08 crc kubenswrapper[4821]: I0930 17:16:08.544395 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-d8fdfd448-rrrhr"] Sep 30 17:16:08 crc kubenswrapper[4821]: E0930 17:16:08.545045 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e84be49a-d25d-43d4-a2ec-f2a1de45d92e" containerName="pull" Sep 30 17:16:08 crc kubenswrapper[4821]: I0930 17:16:08.545057 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="e84be49a-d25d-43d4-a2ec-f2a1de45d92e" containerName="pull" Sep 30 17:16:08 crc kubenswrapper[4821]: E0930 17:16:08.545100 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e84be49a-d25d-43d4-a2ec-f2a1de45d92e" containerName="extract" Sep 30 17:16:08 crc kubenswrapper[4821]: I0930 17:16:08.545107 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="e84be49a-d25d-43d4-a2ec-f2a1de45d92e" containerName="extract" Sep 30 17:16:08 crc kubenswrapper[4821]: E0930 17:16:08.545117 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e84be49a-d25d-43d4-a2ec-f2a1de45d92e" containerName="util" Sep 30 17:16:08 crc kubenswrapper[4821]: I0930 17:16:08.545123 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="e84be49a-d25d-43d4-a2ec-f2a1de45d92e" containerName="util" Sep 30 17:16:08 crc kubenswrapper[4821]: I0930 17:16:08.545229 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="e84be49a-d25d-43d4-a2ec-f2a1de45d92e" containerName="extract" Sep 30 17:16:08 crc kubenswrapper[4821]: I0930 17:16:08.545782 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-d8fdfd448-rrrhr" Sep 30 17:16:08 crc kubenswrapper[4821]: I0930 17:16:08.548387 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-tcttk" Sep 30 17:16:08 crc kubenswrapper[4821]: I0930 17:16:08.569204 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-d8fdfd448-rrrhr"] Sep 30 17:16:08 crc kubenswrapper[4821]: I0930 17:16:08.671063 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrjc6\" (UniqueName: \"kubernetes.io/projected/8e553a25-d53e-410a-9d98-288aeb2eb59e-kube-api-access-mrjc6\") pod \"openstack-operator-controller-operator-d8fdfd448-rrrhr\" (UID: \"8e553a25-d53e-410a-9d98-288aeb2eb59e\") " pod="openstack-operators/openstack-operator-controller-operator-d8fdfd448-rrrhr" Sep 30 17:16:08 crc kubenswrapper[4821]: I0930 17:16:08.772787 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrjc6\" (UniqueName: \"kubernetes.io/projected/8e553a25-d53e-410a-9d98-288aeb2eb59e-kube-api-access-mrjc6\") pod \"openstack-operator-controller-operator-d8fdfd448-rrrhr\" (UID: \"8e553a25-d53e-410a-9d98-288aeb2eb59e\") " pod="openstack-operators/openstack-operator-controller-operator-d8fdfd448-rrrhr" Sep 30 17:16:08 crc kubenswrapper[4821]: I0930 17:16:08.791297 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrjc6\" (UniqueName: \"kubernetes.io/projected/8e553a25-d53e-410a-9d98-288aeb2eb59e-kube-api-access-mrjc6\") pod \"openstack-operator-controller-operator-d8fdfd448-rrrhr\" (UID: \"8e553a25-d53e-410a-9d98-288aeb2eb59e\") " pod="openstack-operators/openstack-operator-controller-operator-d8fdfd448-rrrhr" Sep 30 17:16:08 crc kubenswrapper[4821]: I0930 17:16:08.861190 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-d8fdfd448-rrrhr" Sep 30 17:16:09 crc kubenswrapper[4821]: I0930 17:16:09.336191 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-d8fdfd448-rrrhr"] Sep 30 17:16:09 crc kubenswrapper[4821]: I0930 17:16:09.456319 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-d8fdfd448-rrrhr" event={"ID":"8e553a25-d53e-410a-9d98-288aeb2eb59e","Type":"ContainerStarted","Data":"1e5b550a80f8491783ad0b71b44944365bd23363a7ce1248f5384d728d6dbd67"} Sep 30 17:16:13 crc kubenswrapper[4821]: I0930 17:16:13.490404 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-d8fdfd448-rrrhr" event={"ID":"8e553a25-d53e-410a-9d98-288aeb2eb59e","Type":"ContainerStarted","Data":"3ec4a5f673bcdfc12e3c753035fba6384bbe9a04e85e527e1b536cb3b01b3991"} Sep 30 17:16:16 crc kubenswrapper[4821]: I0930 17:16:16.509567 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-d8fdfd448-rrrhr" event={"ID":"8e553a25-d53e-410a-9d98-288aeb2eb59e","Type":"ContainerStarted","Data":"99ecb727a65d396959a36b2cf69756a6fd46e48284027b7c86441c1565a7b460"} Sep 30 17:16:16 crc kubenswrapper[4821]: I0930 17:16:16.511025 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-d8fdfd448-rrrhr" Sep 30 17:16:16 crc kubenswrapper[4821]: I0930 17:16:16.536947 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-d8fdfd448-rrrhr" podStartSLOduration=2.507344583 podStartE2EDuration="8.536932277s" podCreationTimestamp="2025-09-30 17:16:08 +0000 UTC" firstStartedPulling="2025-09-30 17:16:09.341363664 +0000 UTC m=+765.246409628" lastFinishedPulling="2025-09-30 17:16:15.370951378 +0000 UTC m=+771.275997322" observedRunningTime="2025-09-30 17:16:16.53542875 +0000 UTC m=+772.440474694" watchObservedRunningTime="2025-09-30 17:16:16.536932277 +0000 UTC m=+772.441978221" Sep 30 17:16:18 crc kubenswrapper[4821]: I0930 17:16:18.521161 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-d8fdfd448-rrrhr" Sep 30 17:16:19 crc kubenswrapper[4821]: I0930 17:16:19.350174 4821 patch_prober.go:28] interesting pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:16:19 crc kubenswrapper[4821]: I0930 17:16:19.350269 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:16:19 crc kubenswrapper[4821]: I0930 17:16:19.350321 4821 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" Sep 30 17:16:19 crc kubenswrapper[4821]: I0930 17:16:19.351001 4821 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2f00def1099cd0097896b8c09046872a1da2fa6b07915bdb81dc3ad48b1054ee"} pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 17:16:19 crc kubenswrapper[4821]: I0930 17:16:19.351068 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" containerID="cri-o://2f00def1099cd0097896b8c09046872a1da2fa6b07915bdb81dc3ad48b1054ee" gracePeriod=600 Sep 30 17:16:19 crc kubenswrapper[4821]: I0930 17:16:19.527181 4821 generic.go:334] "Generic (PLEG): container finished" podID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerID="2f00def1099cd0097896b8c09046872a1da2fa6b07915bdb81dc3ad48b1054ee" exitCode=0 Sep 30 17:16:19 crc kubenswrapper[4821]: I0930 17:16:19.527369 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" event={"ID":"1c2ce348-eadc-4629-a03f-fb8924b5b434","Type":"ContainerDied","Data":"2f00def1099cd0097896b8c09046872a1da2fa6b07915bdb81dc3ad48b1054ee"} Sep 30 17:16:19 crc kubenswrapper[4821]: I0930 17:16:19.527722 4821 scope.go:117] "RemoveContainer" containerID="a96195deaa92fdbd5e1ddc64c627aa78cd37aa2134f2026cfd9b64821097de61" Sep 30 17:16:20 crc kubenswrapper[4821]: I0930 17:16:20.534097 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" event={"ID":"1c2ce348-eadc-4629-a03f-fb8924b5b434","Type":"ContainerStarted","Data":"b5eaf5939fe5362fd182fcbe1679c246dcaf1dbb07c54b7f2bf7e11a0269f3a6"} Sep 30 17:16:21 crc kubenswrapper[4821]: I0930 17:16:21.802798 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fzc4p"] Sep 30 17:16:21 crc kubenswrapper[4821]: I0930 17:16:21.803879 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fzc4p" Sep 30 17:16:21 crc kubenswrapper[4821]: I0930 17:16:21.817339 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fzc4p"] Sep 30 17:16:21 crc kubenswrapper[4821]: I0930 17:16:21.837571 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944-utilities\") pod \"certified-operators-fzc4p\" (UID: \"c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944\") " pod="openshift-marketplace/certified-operators-fzc4p" Sep 30 17:16:21 crc kubenswrapper[4821]: I0930 17:16:21.837881 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd7gc\" (UniqueName: \"kubernetes.io/projected/c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944-kube-api-access-wd7gc\") pod \"certified-operators-fzc4p\" (UID: \"c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944\") " pod="openshift-marketplace/certified-operators-fzc4p" Sep 30 17:16:21 crc kubenswrapper[4821]: I0930 17:16:21.838009 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944-catalog-content\") pod \"certified-operators-fzc4p\" (UID: \"c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944\") " pod="openshift-marketplace/certified-operators-fzc4p" Sep 30 17:16:21 crc kubenswrapper[4821]: I0930 17:16:21.939012 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944-utilities\") pod \"certified-operators-fzc4p\" (UID: \"c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944\") " pod="openshift-marketplace/certified-operators-fzc4p" Sep 30 17:16:21 crc kubenswrapper[4821]: I0930 17:16:21.939095 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd7gc\" (UniqueName: \"kubernetes.io/projected/c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944-kube-api-access-wd7gc\") pod \"certified-operators-fzc4p\" (UID: \"c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944\") " pod="openshift-marketplace/certified-operators-fzc4p" Sep 30 17:16:21 crc kubenswrapper[4821]: I0930 17:16:21.939145 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944-catalog-content\") pod \"certified-operators-fzc4p\" (UID: \"c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944\") " pod="openshift-marketplace/certified-operators-fzc4p" Sep 30 17:16:21 crc kubenswrapper[4821]: I0930 17:16:21.939647 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944-utilities\") pod \"certified-operators-fzc4p\" (UID: \"c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944\") " pod="openshift-marketplace/certified-operators-fzc4p" Sep 30 17:16:21 crc kubenswrapper[4821]: I0930 17:16:21.939739 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944-catalog-content\") pod \"certified-operators-fzc4p\" (UID: \"c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944\") " pod="openshift-marketplace/certified-operators-fzc4p" Sep 30 17:16:21 crc kubenswrapper[4821]: I0930 17:16:21.968975 4821 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wd7gc\" (UniqueName: \"kubernetes.io/projected/c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944-kube-api-access-wd7gc\") pod \"certified-operators-fzc4p\" (UID: \"c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944\") " pod="openshift-marketplace/certified-operators-fzc4p" Sep 30 17:16:22 crc kubenswrapper[4821]: I0930 17:16:22.117965 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fzc4p" Sep 30 17:16:22 crc kubenswrapper[4821]: I0930 17:16:22.603625 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fzc4p"] Sep 30 17:16:22 crc kubenswrapper[4821]: W0930 17:16:22.612332 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc59ddbb0_c6ee_4f6a_9fa1_1dc7aa45d944.slice/crio-59b45cc9c22b7cd791d50f47fe49385bbe64dcca18bd3f62f0eb9cafe5baf675 WatchSource:0}: Error finding container 59b45cc9c22b7cd791d50f47fe49385bbe64dcca18bd3f62f0eb9cafe5baf675: Status 404 returned error can't find the container with id 59b45cc9c22b7cd791d50f47fe49385bbe64dcca18bd3f62f0eb9cafe5baf675 Sep 30 17:16:23 crc kubenswrapper[4821]: I0930 17:16:23.551120 4821 generic.go:334] "Generic (PLEG): container finished" podID="c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944" containerID="6221097f70a572f414b86d1b4adb0268f694940ae7d9958eb136432682359569" exitCode=0 Sep 30 17:16:23 crc kubenswrapper[4821]: I0930 17:16:23.551206 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fzc4p" event={"ID":"c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944","Type":"ContainerDied","Data":"6221097f70a572f414b86d1b4adb0268f694940ae7d9958eb136432682359569"} Sep 30 17:16:23 crc kubenswrapper[4821]: I0930 17:16:23.551233 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fzc4p" event={"ID":"c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944","Type":"ContainerStarted","Data":"59b45cc9c22b7cd791d50f47fe49385bbe64dcca18bd3f62f0eb9cafe5baf675"} Sep 30 17:16:24 crc kubenswrapper[4821]: I0930 17:16:24.558019 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fzc4p" event={"ID":"c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944","Type":"ContainerStarted","Data":"8cddde06ba17d1f392284357ca55248c2fdb5268b59ca50adf588b36f2c9eb7a"} Sep 30 17:16:25 crc kubenswrapper[4821]: I0930 17:16:25.565460 4821 generic.go:334] "Generic (PLEG): container finished" podID="c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944" containerID="8cddde06ba17d1f392284357ca55248c2fdb5268b59ca50adf588b36f2c9eb7a" exitCode=0 Sep 30 17:16:25 crc kubenswrapper[4821]: I0930 17:16:25.565502 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fzc4p" event={"ID":"c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944","Type":"ContainerDied","Data":"8cddde06ba17d1f392284357ca55248c2fdb5268b59ca50adf588b36f2c9eb7a"} Sep 30 17:16:26 crc kubenswrapper[4821]: I0930 17:16:26.572819 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fzc4p" event={"ID":"c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944","Type":"ContainerStarted","Data":"6e7d4e231c4b06975a774a573635df18956e46daf32bc583a242c22fe312e743"} Sep 30 17:16:26 crc kubenswrapper[4821]: I0930 17:16:26.593340 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fzc4p" 
podStartSLOduration=3.093525016 podStartE2EDuration="5.593324609s" podCreationTimestamp="2025-09-30 17:16:21 +0000 UTC" firstStartedPulling="2025-09-30 17:16:23.55482011 +0000 UTC m=+779.459866054" lastFinishedPulling="2025-09-30 17:16:26.054619703 +0000 UTC m=+781.959665647" observedRunningTime="2025-09-30 17:16:26.588442608 +0000 UTC m=+782.493488552" watchObservedRunningTime="2025-09-30 17:16:26.593324609 +0000 UTC m=+782.498370553" Sep 30 17:16:32 crc kubenswrapper[4821]: I0930 17:16:32.118724 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fzc4p" Sep 30 17:16:32 crc kubenswrapper[4821]: I0930 17:16:32.119307 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fzc4p" Sep 30 17:16:32 crc kubenswrapper[4821]: I0930 17:16:32.160815 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fzc4p" Sep 30 17:16:32 crc kubenswrapper[4821]: I0930 17:16:32.675596 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fzc4p" Sep 30 17:16:32 crc kubenswrapper[4821]: I0930 17:16:32.717378 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fzc4p"] Sep 30 17:16:34 crc kubenswrapper[4821]: I0930 17:16:34.647542 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fzc4p" podUID="c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944" containerName="registry-server" containerID="cri-o://6e7d4e231c4b06975a774a573635df18956e46daf32bc583a242c22fe312e743" gracePeriod=2 Sep 30 17:16:35 crc kubenswrapper[4821]: I0930 17:16:35.044792 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fzc4p" Sep 30 17:16:35 crc kubenswrapper[4821]: I0930 17:16:35.047035 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wd7gc\" (UniqueName: \"kubernetes.io/projected/c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944-kube-api-access-wd7gc\") pod \"c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944\" (UID: \"c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944\") " Sep 30 17:16:35 crc kubenswrapper[4821]: I0930 17:16:35.047072 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944-catalog-content\") pod \"c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944\" (UID: \"c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944\") " Sep 30 17:16:35 crc kubenswrapper[4821]: I0930 17:16:35.047132 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944-utilities\") pod \"c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944\" (UID: \"c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944\") " Sep 30 17:16:35 crc kubenswrapper[4821]: I0930 17:16:35.048683 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944-utilities" (OuterVolumeSpecName: "utilities") pod "c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944" (UID: "c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:16:35 crc kubenswrapper[4821]: I0930 17:16:35.054255 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944-kube-api-access-wd7gc" (OuterVolumeSpecName: "kube-api-access-wd7gc") pod "c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944" (UID: "c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944"). InnerVolumeSpecName "kube-api-access-wd7gc". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:16:35 crc kubenswrapper[4821]: I0930 17:16:35.148908 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:35 crc kubenswrapper[4821]: I0930 17:16:35.148940 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wd7gc\" (UniqueName: \"kubernetes.io/projected/c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944-kube-api-access-wd7gc\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:35 crc kubenswrapper[4821]: I0930 17:16:35.656751 4821 generic.go:334] "Generic (PLEG): container finished" podID="c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944" containerID="6e7d4e231c4b06975a774a573635df18956e46daf32bc583a242c22fe312e743" exitCode=0 Sep 30 17:16:35 crc kubenswrapper[4821]: I0930 17:16:35.656803 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fzc4p" event={"ID":"c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944","Type":"ContainerDied","Data":"6e7d4e231c4b06975a774a573635df18956e46daf32bc583a242c22fe312e743"} Sep 30 17:16:35 crc kubenswrapper[4821]: I0930 17:16:35.656826 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fzc4p" Sep 30 17:16:35 crc kubenswrapper[4821]: I0930 17:16:35.656837 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fzc4p" event={"ID":"c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944","Type":"ContainerDied","Data":"59b45cc9c22b7cd791d50f47fe49385bbe64dcca18bd3f62f0eb9cafe5baf675"} Sep 30 17:16:35 crc kubenswrapper[4821]: I0930 17:16:35.656850 4821 scope.go:117] "RemoveContainer" containerID="6e7d4e231c4b06975a774a573635df18956e46daf32bc583a242c22fe312e743" Sep 30 17:16:35 crc kubenswrapper[4821]: I0930 17:16:35.674804 4821 scope.go:117] "RemoveContainer" containerID="8cddde06ba17d1f392284357ca55248c2fdb5268b59ca50adf588b36f2c9eb7a" Sep 30 17:16:35 crc kubenswrapper[4821]: I0930 17:16:35.702288 4821 scope.go:117] "RemoveContainer" containerID="6221097f70a572f414b86d1b4adb0268f694940ae7d9958eb136432682359569" Sep 30 17:16:35 crc kubenswrapper[4821]: I0930 17:16:35.719353 4821 scope.go:117] "RemoveContainer" containerID="6e7d4e231c4b06975a774a573635df18956e46daf32bc583a242c22fe312e743" Sep 30 17:16:35 crc kubenswrapper[4821]: E0930 17:16:35.720469 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e7d4e231c4b06975a774a573635df18956e46daf32bc583a242c22fe312e743\": container with ID starting with 6e7d4e231c4b06975a774a573635df18956e46daf32bc583a242c22fe312e743 not found: ID does not exist" containerID="6e7d4e231c4b06975a774a573635df18956e46daf32bc583a242c22fe312e743" Sep 30 17:16:35 crc kubenswrapper[4821]: I0930 17:16:35.720513 4821 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6e7d4e231c4b06975a774a573635df18956e46daf32bc583a242c22fe312e743"} err="failed to get container status \"6e7d4e231c4b06975a774a573635df18956e46daf32bc583a242c22fe312e743\": rpc error: code = NotFound desc = could not find container \"6e7d4e231c4b06975a774a573635df18956e46daf32bc583a242c22fe312e743\": container with ID starting with 6e7d4e231c4b06975a774a573635df18956e46daf32bc583a242c22fe312e743 not found: ID does not exist" Sep 30 17:16:35 crc kubenswrapper[4821]: I0930 17:16:35.720542 4821 scope.go:117] "RemoveContainer" containerID="8cddde06ba17d1f392284357ca55248c2fdb5268b59ca50adf588b36f2c9eb7a" Sep 30 17:16:35 crc kubenswrapper[4821]: E0930 17:16:35.721357 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cddde06ba17d1f392284357ca55248c2fdb5268b59ca50adf588b36f2c9eb7a\": container with ID starting with 8cddde06ba17d1f392284357ca55248c2fdb5268b59ca50adf588b36f2c9eb7a not found: ID does not exist" containerID="8cddde06ba17d1f392284357ca55248c2fdb5268b59ca50adf588b36f2c9eb7a" Sep 30 17:16:35 crc kubenswrapper[4821]: I0930 17:16:35.721408 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cddde06ba17d1f392284357ca55248c2fdb5268b59ca50adf588b36f2c9eb7a"} err="failed to get container status \"8cddde06ba17d1f392284357ca55248c2fdb5268b59ca50adf588b36f2c9eb7a\": rpc error: code = NotFound desc = could not find container \"8cddde06ba17d1f392284357ca55248c2fdb5268b59ca50adf588b36f2c9eb7a\": container with ID starting with 8cddde06ba17d1f392284357ca55248c2fdb5268b59ca50adf588b36f2c9eb7a not found: ID does not exist" Sep 30 17:16:35 crc kubenswrapper[4821]: I0930 17:16:35.721441 4821 scope.go:117] "RemoveContainer" containerID="6221097f70a572f414b86d1b4adb0268f694940ae7d9958eb136432682359569" Sep 30 17:16:35 crc kubenswrapper[4821]: E0930 17:16:35.721842 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6221097f70a572f414b86d1b4adb0268f694940ae7d9958eb136432682359569\": container with ID starting with 6221097f70a572f414b86d1b4adb0268f694940ae7d9958eb136432682359569 not found: ID does not exist" containerID="6221097f70a572f414b86d1b4adb0268f694940ae7d9958eb136432682359569" Sep 30 17:16:35 crc kubenswrapper[4821]: I0930 17:16:35.721878 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6221097f70a572f414b86d1b4adb0268f694940ae7d9958eb136432682359569"} err="failed to get container status \"6221097f70a572f414b86d1b4adb0268f694940ae7d9958eb136432682359569\": rpc error: code = NotFound desc = could not find container \"6221097f70a572f414b86d1b4adb0268f694940ae7d9958eb136432682359569\": container with ID starting with 6221097f70a572f414b86d1b4adb0268f694940ae7d9958eb136432682359569 not found: ID does not exist" Sep 30 17:16:35 crc kubenswrapper[4821]: I0930 17:16:35.727927 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944" (UID: "c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:16:35 crc kubenswrapper[4821]: I0930 17:16:35.755752 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:35 crc kubenswrapper[4821]: I0930 17:16:35.990816 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fzc4p"] Sep 30 17:16:35 crc kubenswrapper[4821]: I0930 17:16:35.997250 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fzc4p"] Sep 30 17:16:36 crc kubenswrapper[4821]: I0930 17:16:36.714835 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944" path="/var/lib/kubelet/pods/c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944/volumes" Sep 30 17:16:43 crc kubenswrapper[4821]: I0930 17:16:43.448608 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4jmt8"] Sep 30 17:16:43 crc kubenswrapper[4821]: E0930 17:16:43.449905 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944" containerName="extract-utilities" Sep 30 17:16:43 crc kubenswrapper[4821]: I0930 17:16:43.449921 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944" containerName="extract-utilities" Sep 30 17:16:43 crc kubenswrapper[4821]: E0930 17:16:43.449941 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944" containerName="registry-server" Sep 30 17:16:43 crc kubenswrapper[4821]: I0930 17:16:43.449949 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944" containerName="registry-server" Sep 30 17:16:43 crc kubenswrapper[4821]: E0930 17:16:43.449972 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944" containerName="extract-content" Sep 30 17:16:43 crc kubenswrapper[4821]: I0930 17:16:43.449980 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944" containerName="extract-content" Sep 30 17:16:43 crc kubenswrapper[4821]: I0930 17:16:43.450579 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="c59ddbb0-c6ee-4f6a-9fa1-1dc7aa45d944" containerName="registry-server" Sep 30 17:16:43 crc kubenswrapper[4821]: I0930 17:16:43.454389 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4jmt8" Sep 30 17:16:43 crc kubenswrapper[4821]: I0930 17:16:43.473633 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4jmt8"] Sep 30 17:16:43 crc kubenswrapper[4821]: I0930 17:16:43.650158 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80a27e6c-0576-448d-b597-816707084e37-catalog-content\") pod \"community-operators-4jmt8\" (UID: \"80a27e6c-0576-448d-b597-816707084e37\") " pod="openshift-marketplace/community-operators-4jmt8" Sep 30 17:16:43 crc kubenswrapper[4821]: I0930 17:16:43.650229 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fktd\" (UniqueName: \"kubernetes.io/projected/80a27e6c-0576-448d-b597-816707084e37-kube-api-access-9fktd\") pod \"community-operators-4jmt8\" (UID: \"80a27e6c-0576-448d-b597-816707084e37\") " pod="openshift-marketplace/community-operators-4jmt8" Sep 30 17:16:43 crc kubenswrapper[4821]: I0930 17:16:43.650293 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80a27e6c-0576-448d-b597-816707084e37-utilities\") pod \"community-operators-4jmt8\" (UID: \"80a27e6c-0576-448d-b597-816707084e37\") " pod="openshift-marketplace/community-operators-4jmt8" Sep 30 17:16:43 crc kubenswrapper[4821]: I0930 17:16:43.752253 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80a27e6c-0576-448d-b597-816707084e37-utilities\") pod \"community-operators-4jmt8\" (UID: \"80a27e6c-0576-448d-b597-816707084e37\") " pod="openshift-marketplace/community-operators-4jmt8" Sep 30 17:16:43 crc kubenswrapper[4821]: I0930 17:16:43.752348 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80a27e6c-0576-448d-b597-816707084e37-catalog-content\") pod \"community-operators-4jmt8\" (UID: \"80a27e6c-0576-448d-b597-816707084e37\") " pod="openshift-marketplace/community-operators-4jmt8" Sep 30 17:16:43 crc kubenswrapper[4821]: I0930 17:16:43.752370 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fktd\" (UniqueName: \"kubernetes.io/projected/80a27e6c-0576-448d-b597-816707084e37-kube-api-access-9fktd\") pod \"community-operators-4jmt8\" (UID: \"80a27e6c-0576-448d-b597-816707084e37\") " pod="openshift-marketplace/community-operators-4jmt8" Sep 30 17:16:43 crc kubenswrapper[4821]: I0930 17:16:43.752767 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80a27e6c-0576-448d-b597-816707084e37-catalog-content\") pod \"community-operators-4jmt8\" (UID: \"80a27e6c-0576-448d-b597-816707084e37\") " pod="openshift-marketplace/community-operators-4jmt8" Sep 30 17:16:43 crc kubenswrapper[4821]: I0930 17:16:43.752969 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80a27e6c-0576-448d-b597-816707084e37-utilities\") pod \"community-operators-4jmt8\" (UID: \"80a27e6c-0576-448d-b597-816707084e37\") " pod="openshift-marketplace/community-operators-4jmt8" Sep 30 17:16:43 crc kubenswrapper[4821]: I0930 17:16:43.781763 4821 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9fktd\" (UniqueName: \"kubernetes.io/projected/80a27e6c-0576-448d-b597-816707084e37-kube-api-access-9fktd\") pod \"community-operators-4jmt8\" (UID: \"80a27e6c-0576-448d-b597-816707084e37\") " pod="openshift-marketplace/community-operators-4jmt8" Sep 30 17:16:44 crc kubenswrapper[4821]: I0930 17:16:44.077564 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4jmt8" Sep 30 17:16:44 crc kubenswrapper[4821]: I0930 17:16:44.511117 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4jmt8"] Sep 30 17:16:44 crc kubenswrapper[4821]: I0930 17:16:44.712470 4821 generic.go:334] "Generic (PLEG): container finished" podID="80a27e6c-0576-448d-b597-816707084e37" containerID="5227607b342377d33b1b0127f0786ce47a44bda555558ca8669aa7baa00558a4" exitCode=0 Sep 30 17:16:44 crc kubenswrapper[4821]: I0930 17:16:44.714213 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4jmt8" event={"ID":"80a27e6c-0576-448d-b597-816707084e37","Type":"ContainerDied","Data":"5227607b342377d33b1b0127f0786ce47a44bda555558ca8669aa7baa00558a4"} Sep 30 17:16:44 crc kubenswrapper[4821]: I0930 17:16:44.714242 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4jmt8" event={"ID":"80a27e6c-0576-448d-b597-816707084e37","Type":"ContainerStarted","Data":"0aa9eb8e74998cb01b6168f475ebd3d7baabb52d0e2657a599351f615a80eb8c"} Sep 30 17:16:45 crc kubenswrapper[4821]: I0930 17:16:45.719482 4821 generic.go:334] "Generic (PLEG): container finished" podID="80a27e6c-0576-448d-b597-816707084e37" containerID="611f89f77fecacc47ca169cf1ea937ce77023689829a539e1a22fe93b1c1e33d" exitCode=0 Sep 30 17:16:45 crc kubenswrapper[4821]: I0930 17:16:45.719543 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4jmt8" event={"ID":"80a27e6c-0576-448d-b597-816707084e37","Type":"ContainerDied","Data":"611f89f77fecacc47ca169cf1ea937ce77023689829a539e1a22fe93b1c1e33d"} Sep 30 17:16:46 crc kubenswrapper[4821]: I0930 17:16:46.742602 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4jmt8" event={"ID":"80a27e6c-0576-448d-b597-816707084e37","Type":"ContainerStarted","Data":"aace720fdd007a793955b7eb213678f35c42774ccc87924a04ab86b06163a43d"} Sep 30 17:16:46 crc kubenswrapper[4821]: I0930 17:16:46.774335 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4jmt8" podStartSLOduration=2.337887048 podStartE2EDuration="3.774313452s" podCreationTimestamp="2025-09-30 17:16:43 +0000 UTC" firstStartedPulling="2025-09-30 17:16:44.713784203 +0000 UTC m=+800.618830147" lastFinishedPulling="2025-09-30 17:16:46.150210607 +0000 UTC m=+802.055256551" observedRunningTime="2025-09-30 17:16:46.770034975 +0000 UTC m=+802.675080919" watchObservedRunningTime="2025-09-30 17:16:46.774313452 +0000 UTC m=+802.679359396" Sep 30 17:16:48 crc kubenswrapper[4821]: I0930 17:16:48.852793 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cszn6"] Sep 30 17:16:48 crc kubenswrapper[4821]: I0930 17:16:48.855191 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cszn6" Sep 30 17:16:48 crc kubenswrapper[4821]: I0930 17:16:48.875670 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cszn6"] Sep 30 17:16:49 crc kubenswrapper[4821]: I0930 17:16:49.022412 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24b765cb-3f9e-4580-9af0-568941ffc6e4-utilities\") pod \"redhat-marketplace-cszn6\" (UID: \"24b765cb-3f9e-4580-9af0-568941ffc6e4\") " pod="openshift-marketplace/redhat-marketplace-cszn6" Sep 30 17:16:49 crc kubenswrapper[4821]: I0930 17:16:49.022473 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24b765cb-3f9e-4580-9af0-568941ffc6e4-catalog-content\") pod \"redhat-marketplace-cszn6\" (UID: \"24b765cb-3f9e-4580-9af0-568941ffc6e4\") " pod="openshift-marketplace/redhat-marketplace-cszn6" Sep 30 17:16:49 crc kubenswrapper[4821]: I0930 17:16:49.022505 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvklh\" (UniqueName: \"kubernetes.io/projected/24b765cb-3f9e-4580-9af0-568941ffc6e4-kube-api-access-vvklh\") pod \"redhat-marketplace-cszn6\" (UID: \"24b765cb-3f9e-4580-9af0-568941ffc6e4\") " pod="openshift-marketplace/redhat-marketplace-cszn6" Sep 30 17:16:49 crc kubenswrapper[4821]: I0930 17:16:49.123309 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24b765cb-3f9e-4580-9af0-568941ffc6e4-utilities\") pod \"redhat-marketplace-cszn6\" (UID: \"24b765cb-3f9e-4580-9af0-568941ffc6e4\") " pod="openshift-marketplace/redhat-marketplace-cszn6" Sep 30 17:16:49 crc kubenswrapper[4821]: I0930 17:16:49.123373 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24b765cb-3f9e-4580-9af0-568941ffc6e4-catalog-content\") pod \"redhat-marketplace-cszn6\" (UID: \"24b765cb-3f9e-4580-9af0-568941ffc6e4\") " pod="openshift-marketplace/redhat-marketplace-cszn6" Sep 30 17:16:49 crc kubenswrapper[4821]: I0930 17:16:49.123402 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvklh\" (UniqueName: \"kubernetes.io/projected/24b765cb-3f9e-4580-9af0-568941ffc6e4-kube-api-access-vvklh\") pod \"redhat-marketplace-cszn6\" (UID: \"24b765cb-3f9e-4580-9af0-568941ffc6e4\") " pod="openshift-marketplace/redhat-marketplace-cszn6" Sep 30 17:16:49 crc kubenswrapper[4821]: I0930 17:16:49.124010 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24b765cb-3f9e-4580-9af0-568941ffc6e4-catalog-content\") pod \"redhat-marketplace-cszn6\" (UID: \"24b765cb-3f9e-4580-9af0-568941ffc6e4\") " pod="openshift-marketplace/redhat-marketplace-cszn6" Sep 30 17:16:49 crc kubenswrapper[4821]: I0930 17:16:49.124056 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24b765cb-3f9e-4580-9af0-568941ffc6e4-utilities\") pod \"redhat-marketplace-cszn6\" (UID: \"24b765cb-3f9e-4580-9af0-568941ffc6e4\") " pod="openshift-marketplace/redhat-marketplace-cszn6" Sep 30 17:16:49 crc kubenswrapper[4821]: I0930 17:16:49.157897 4821 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vvklh\" (UniqueName: \"kubernetes.io/projected/24b765cb-3f9e-4580-9af0-568941ffc6e4-kube-api-access-vvklh\") pod \"redhat-marketplace-cszn6\" (UID: \"24b765cb-3f9e-4580-9af0-568941ffc6e4\") " pod="openshift-marketplace/redhat-marketplace-cszn6" Sep 30 17:16:49 crc kubenswrapper[4821]: I0930 17:16:49.172843 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cszn6" Sep 30 17:16:49 crc kubenswrapper[4821]: I0930 17:16:49.655568 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cszn6"] Sep 30 17:16:49 crc kubenswrapper[4821]: I0930 17:16:49.761697 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cszn6" event={"ID":"24b765cb-3f9e-4580-9af0-568941ffc6e4","Type":"ContainerStarted","Data":"42ff2575b22b2872a880af626cb8c11a4440d92b40f0e9b10d48414e81bf8d6b"} Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.489624 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6ff8b75857-7msm9"] Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.490706 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-7msm9" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.497258 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-644bddb6d8-p5svk"] Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.498454 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-p5svk" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.499343 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-c7r26" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.502370 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-c2qr6" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.517299 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-84f4f7b77b-k8qkc"] Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.518422 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-k8qkc" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.527145 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-5sbbm" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.530489 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-644bddb6d8-p5svk"] Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.551470 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc99s\" (UniqueName: \"kubernetes.io/projected/8941a980-0eba-405b-b73a-0d99cf87d170-kube-api-access-kc99s\") pod \"designate-operator-controller-manager-84f4f7b77b-k8qkc\" (UID: \"8941a980-0eba-405b-b73a-0d99cf87d170\") " pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-k8qkc" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.551529 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng78c\" (UniqueName: \"kubernetes.io/projected/f8fa53cb-09d0-4d60-8b2f-8114904df38c-kube-api-access-ng78c\") pod \"barbican-operator-controller-manager-6ff8b75857-7msm9\" (UID: \"f8fa53cb-09d0-4d60-8b2f-8114904df38c\") " pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-7msm9" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.551564 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzhwc\" (UniqueName: \"kubernetes.io/projected/10efd7b7-19ec-41c1-871e-a44c8d0d8181-kube-api-access-gzhwc\") pod \"cinder-operator-controller-manager-644bddb6d8-p5svk\" (UID: \"10efd7b7-19ec-41c1-871e-a44c8d0d8181\") " pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-p5svk" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.553756 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-84f4f7b77b-k8qkc"] Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.572287 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-84958c4d49-8zftm"] Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.584439 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5d889d78cf-l5jlc"] Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.585688 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-l5jlc" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.586165 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-8zftm" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.603328 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6ff8b75857-7msm9"] Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.608272 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-65k2v" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.621211 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-2wpdj" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.637402 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-84958c4d49-8zftm"] Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.659696 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc99s\" (UniqueName: \"kubernetes.io/projected/8941a980-0eba-405b-b73a-0d99cf87d170-kube-api-access-kc99s\") pod \"designate-operator-controller-manager-84f4f7b77b-k8qkc\" (UID: \"8941a980-0eba-405b-b73a-0d99cf87d170\") " pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-k8qkc" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.659753 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng78c\" (UniqueName: \"kubernetes.io/projected/f8fa53cb-09d0-4d60-8b2f-8114904df38c-kube-api-access-ng78c\") pod \"barbican-operator-controller-manager-6ff8b75857-7msm9\" (UID: \"f8fa53cb-09d0-4d60-8b2f-8114904df38c\") " pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-7msm9" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.659798 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzhwc\" (UniqueName: \"kubernetes.io/projected/10efd7b7-19ec-41c1-871e-a44c8d0d8181-kube-api-access-gzhwc\") pod \"cinder-operator-controller-manager-644bddb6d8-p5svk\" (UID: \"10efd7b7-19ec-41c1-871e-a44c8d0d8181\") " pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-p5svk" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.659825 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7znr2\" (UniqueName: \"kubernetes.io/projected/d46bd2b3-81c8-4425-b7d3-0df63252f647-kube-api-access-7znr2\") pod \"heat-operator-controller-manager-5d889d78cf-l5jlc\" (UID: \"d46bd2b3-81c8-4425-b7d3-0df63252f647\") " pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-l5jlc" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.660173 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptgnk\" (UniqueName: \"kubernetes.io/projected/0f92490a-9edc-463e-afa8-35d5ff0fc449-kube-api-access-ptgnk\") pod \"glance-operator-controller-manager-84958c4d49-8zftm\" (UID: \"0f92490a-9edc-463e-afa8-35d5ff0fc449\") " pod="openstack-operators/glance-operator-controller-manager-84958c4d49-8zftm" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.662330 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5d889d78cf-l5jlc"] Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.686155 4821 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/horizon-operator-controller-manager-9f4696d94-qvfwd"] Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.687414 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-9f4696d94-qvfwd"] Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.687503 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-qvfwd" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.692513 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-4ztzp" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.695616 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d857cc749-xg92t"] Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.696752 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d857cc749-xg92t" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.708023 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.714146 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-x6pn7" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.714880 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzhwc\" (UniqueName: \"kubernetes.io/projected/10efd7b7-19ec-41c1-871e-a44c8d0d8181-kube-api-access-gzhwc\") pod \"cinder-operator-controller-manager-644bddb6d8-p5svk\" (UID: \"10efd7b7-19ec-41c1-871e-a44c8d0d8181\") " pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-p5svk" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.722426 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-5bd55b4bff-h5h8d"] Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.723417 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-h5h8d" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.727445 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-fc88d" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.728643 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc99s\" (UniqueName: \"kubernetes.io/projected/8941a980-0eba-405b-b73a-0d99cf87d170-kube-api-access-kc99s\") pod \"designate-operator-controller-manager-84f4f7b77b-k8qkc\" (UID: \"8941a980-0eba-405b-b73a-0d99cf87d170\") " pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-k8qkc" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.731859 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng78c\" (UniqueName: \"kubernetes.io/projected/f8fa53cb-09d0-4d60-8b2f-8114904df38c-kube-api-access-ng78c\") pod \"barbican-operator-controller-manager-6ff8b75857-7msm9\" (UID: \"f8fa53cb-09d0-4d60-8b2f-8114904df38c\") " pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-7msm9" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.731938 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-7975b88857-cxn9w"] Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.733165 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-7975b88857-cxn9w" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.737262 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d857cc749-xg92t"] Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.745949 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-62f2l" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.746094 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-7975b88857-cxn9w"] Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.760150 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-5bd55b4bff-h5h8d"] Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.760746 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-6d68dbc695-dh2hf"] Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.766989 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k65fl\" (UniqueName: \"kubernetes.io/projected/9aa0f9eb-c484-4503-8a83-1cce3d3034c4-kube-api-access-k65fl\") pod \"infra-operator-controller-manager-7d857cc749-xg92t\" (UID: \"9aa0f9eb-c484-4503-8a83-1cce3d3034c4\") " pod="openstack-operators/infra-operator-controller-manager-7d857cc749-xg92t" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.767024 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8phst\" (UniqueName: \"kubernetes.io/projected/d2e266d9-b27d-4b28-a69c-15245c94e1eb-kube-api-access-8phst\") pod \"horizon-operator-controller-manager-9f4696d94-qvfwd\" (UID: \"d2e266d9-b27d-4b28-a69c-15245c94e1eb\") " 
pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-qvfwd" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.767054 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mts28\" (UniqueName: \"kubernetes.io/projected/4eda6bf4-b8c3-4c02-aead-2d3bacac7b3b-kube-api-access-mts28\") pod \"keystone-operator-controller-manager-5bd55b4bff-h5h8d\" (UID: \"4eda6bf4-b8c3-4c02-aead-2d3bacac7b3b\") " pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-h5h8d" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.767095 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7znr2\" (UniqueName: \"kubernetes.io/projected/d46bd2b3-81c8-4425-b7d3-0df63252f647-kube-api-access-7znr2\") pod \"heat-operator-controller-manager-5d889d78cf-l5jlc\" (UID: \"d46bd2b3-81c8-4425-b7d3-0df63252f647\") " pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-l5jlc" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.767129 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5st6r\" (UniqueName: \"kubernetes.io/projected/483a7050-54fe-4ae7-bc69-55a4dff975f7-kube-api-access-5st6r\") pod \"ironic-operator-controller-manager-7975b88857-cxn9w\" (UID: \"483a7050-54fe-4ae7-bc69-55a4dff975f7\") " pod="openstack-operators/ironic-operator-controller-manager-7975b88857-cxn9w" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.767170 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9aa0f9eb-c484-4503-8a83-1cce3d3034c4-cert\") pod \"infra-operator-controller-manager-7d857cc749-xg92t\" (UID: \"9aa0f9eb-c484-4503-8a83-1cce3d3034c4\") " pod="openstack-operators/infra-operator-controller-manager-7d857cc749-xg92t" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.767195 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptgnk\" (UniqueName: \"kubernetes.io/projected/0f92490a-9edc-463e-afa8-35d5ff0fc449-kube-api-access-ptgnk\") pod \"glance-operator-controller-manager-84958c4d49-8zftm\" (UID: \"0f92490a-9edc-463e-afa8-35d5ff0fc449\") " pod="openstack-operators/glance-operator-controller-manager-84958c4d49-8zftm" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.769991 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-dh2hf" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.795645 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-z44pk" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.797526 4821 generic.go:334] "Generic (PLEG): container finished" podID="24b765cb-3f9e-4580-9af0-568941ffc6e4" containerID="3390d0ff0fd4da53ae09eb91c5ce33826a604b9e2e60f0c7732ef2d82e5cfa3d" exitCode=0 Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.798300 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cszn6" event={"ID":"24b765cb-3f9e-4580-9af0-568941ffc6e4","Type":"ContainerDied","Data":"3390d0ff0fd4da53ae09eb91c5ce33826a604b9e2e60f0c7732ef2d82e5cfa3d"} Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.807473 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-7msm9" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.816012 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptgnk\" (UniqueName: \"kubernetes.io/projected/0f92490a-9edc-463e-afa8-35d5ff0fc449-kube-api-access-ptgnk\") pod \"glance-operator-controller-manager-84958c4d49-8zftm\" (UID: \"0f92490a-9edc-463e-afa8-35d5ff0fc449\") " pod="openstack-operators/glance-operator-controller-manager-84958c4d49-8zftm" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.817583 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-p5svk" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.820984 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6d68dbc695-dh2hf"] Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.829788 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7znr2\" (UniqueName: \"kubernetes.io/projected/d46bd2b3-81c8-4425-b7d3-0df63252f647-kube-api-access-7znr2\") pod \"heat-operator-controller-manager-5d889d78cf-l5jlc\" (UID: \"d46bd2b3-81c8-4425-b7d3-0df63252f647\") " pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-l5jlc" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.840821 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-k8qkc" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.854220 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-88c7-5cx4l"] Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.855420 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-88c7-5cx4l" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.859849 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-8dcgc" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.861653 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-88c7-5cx4l"] Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.868142 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9aa0f9eb-c484-4503-8a83-1cce3d3034c4-cert\") pod \"infra-operator-controller-manager-7d857cc749-xg92t\" (UID: \"9aa0f9eb-c484-4503-8a83-1cce3d3034c4\") " pod="openstack-operators/infra-operator-controller-manager-7d857cc749-xg92t" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.868214 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k65fl\" (UniqueName: \"kubernetes.io/projected/9aa0f9eb-c484-4503-8a83-1cce3d3034c4-kube-api-access-k65fl\") pod \"infra-operator-controller-manager-7d857cc749-xg92t\" (UID: \"9aa0f9eb-c484-4503-8a83-1cce3d3034c4\") " pod="openstack-operators/infra-operator-controller-manager-7d857cc749-xg92t" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.868232 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8phst\" (UniqueName: \"kubernetes.io/projected/d2e266d9-b27d-4b28-a69c-15245c94e1eb-kube-api-access-8phst\") pod \"horizon-operator-controller-manager-9f4696d94-qvfwd\" (UID: \"d2e266d9-b27d-4b28-a69c-15245c94e1eb\") " pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-qvfwd" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.868260 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rm8l\" (UniqueName: \"kubernetes.io/projected/a8683557-33d9-4018-94eb-b65323379f05-kube-api-access-7rm8l\") pod \"manila-operator-controller-manager-6d68dbc695-dh2hf\" (UID: \"a8683557-33d9-4018-94eb-b65323379f05\") " pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-dh2hf" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.868283 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mts28\" (UniqueName: \"kubernetes.io/projected/4eda6bf4-b8c3-4c02-aead-2d3bacac7b3b-kube-api-access-mts28\") pod \"keystone-operator-controller-manager-5bd55b4bff-h5h8d\" (UID: \"4eda6bf4-b8c3-4c02-aead-2d3bacac7b3b\") " pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-h5h8d" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.868331 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5st6r\" (UniqueName: \"kubernetes.io/projected/483a7050-54fe-4ae7-bc69-55a4dff975f7-kube-api-access-5st6r\") pod \"ironic-operator-controller-manager-7975b88857-cxn9w\" (UID: \"483a7050-54fe-4ae7-bc69-55a4dff975f7\") " pod="openstack-operators/ironic-operator-controller-manager-7975b88857-cxn9w" Sep 30 17:16:50 crc kubenswrapper[4821]: E0930 17:16:50.868839 4821 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Sep 30 17:16:50 crc kubenswrapper[4821]: E0930 17:16:50.868886 4821 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/secret/9aa0f9eb-c484-4503-8a83-1cce3d3034c4-cert podName:9aa0f9eb-c484-4503-8a83-1cce3d3034c4 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:51.368870898 +0000 UTC m=+807.273916842 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9aa0f9eb-c484-4503-8a83-1cce3d3034c4-cert") pod "infra-operator-controller-manager-7d857cc749-xg92t" (UID: "9aa0f9eb-c484-4503-8a83-1cce3d3034c4") : secret "infra-operator-webhook-server-cert" not found Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.898979 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64d7b59854-qcxjz"] Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.899979 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-qcxjz" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.907355 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mts28\" (UniqueName: \"kubernetes.io/projected/4eda6bf4-b8c3-4c02-aead-2d3bacac7b3b-kube-api-access-mts28\") pod \"keystone-operator-controller-manager-5bd55b4bff-h5h8d\" (UID: \"4eda6bf4-b8c3-4c02-aead-2d3bacac7b3b\") " pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-h5h8d" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.910009 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-xpg85" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.923858 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k65fl\" (UniqueName: \"kubernetes.io/projected/9aa0f9eb-c484-4503-8a83-1cce3d3034c4-kube-api-access-k65fl\") pod \"infra-operator-controller-manager-7d857cc749-xg92t\" (UID: \"9aa0f9eb-c484-4503-8a83-1cce3d3034c4\") " pod="openstack-operators/infra-operator-controller-manager-7d857cc749-xg92t" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.928149 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-l5jlc" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.931421 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-8zftm" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.936040 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-c7c776c96-6przq"] Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.939593 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5st6r\" (UniqueName: \"kubernetes.io/projected/483a7050-54fe-4ae7-bc69-55a4dff975f7-kube-api-access-5st6r\") pod \"ironic-operator-controller-manager-7975b88857-cxn9w\" (UID: \"483a7050-54fe-4ae7-bc69-55a4dff975f7\") " pod="openstack-operators/ironic-operator-controller-manager-7975b88857-cxn9w" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.944369 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-c7c776c96-6przq" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.944811 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8phst\" (UniqueName: \"kubernetes.io/projected/d2e266d9-b27d-4b28-a69c-15245c94e1eb-kube-api-access-8phst\") pod \"horizon-operator-controller-manager-9f4696d94-qvfwd\" (UID: \"d2e266d9-b27d-4b28-a69c-15245c94e1eb\") " pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-qvfwd" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.951766 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-f2tsb" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.966379 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64d7b59854-qcxjz"] Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.970464 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4qzl\" (UniqueName: \"kubernetes.io/projected/a232fb81-f800-4266-b287-ba2d7be562b8-kube-api-access-d4qzl\") pod \"mariadb-operator-controller-manager-88c7-5cx4l\" (UID: \"a232fb81-f800-4266-b287-ba2d7be562b8\") " pod="openstack-operators/mariadb-operator-controller-manager-88c7-5cx4l" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.972452 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfsq7\" (UniqueName: \"kubernetes.io/projected/42826092-1d4a-4edd-b929-8ae464702936-kube-api-access-mfsq7\") pod \"neutron-operator-controller-manager-64d7b59854-qcxjz\" (UID: \"42826092-1d4a-4edd-b929-8ae464702936\") " pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-qcxjz" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.972661 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz7qt\" (UniqueName: \"kubernetes.io/projected/879eea6a-d132-4b52-a3ce-93a890f5275a-kube-api-access-bz7qt\") pod \"nova-operator-controller-manager-c7c776c96-6przq\" (UID: \"879eea6a-d132-4b52-a3ce-93a890f5275a\") " pod="openstack-operators/nova-operator-controller-manager-c7c776c96-6przq" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.972749 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rm8l\" (UniqueName: \"kubernetes.io/projected/a8683557-33d9-4018-94eb-b65323379f05-kube-api-access-7rm8l\") pod \"manila-operator-controller-manager-6d68dbc695-dh2hf\" (UID: \"a8683557-33d9-4018-94eb-b65323379f05\") " pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-dh2hf" Sep 30 17:16:50 crc kubenswrapper[4821]: I0930 17:16:50.986385 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-c7c776c96-6przq"] Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.014277 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-27fgr"] Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.015224 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-27fgr" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.028586 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-fwk9g" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.038575 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-27fgr"] Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.067666 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-qvfwd" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.068887 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rm8l\" (UniqueName: \"kubernetes.io/projected/a8683557-33d9-4018-94eb-b65323379f05-kube-api-access-7rm8l\") pod \"manila-operator-controller-manager-6d68dbc695-dh2hf\" (UID: \"a8683557-33d9-4018-94eb-b65323379f05\") " pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-dh2hf" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.069598 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-589c58c6c-jnrfz"] Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.074726 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-jnrfz" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.078215 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfsq7\" (UniqueName: \"kubernetes.io/projected/42826092-1d4a-4edd-b929-8ae464702936-kube-api-access-mfsq7\") pod \"neutron-operator-controller-manager-64d7b59854-qcxjz\" (UID: \"42826092-1d4a-4edd-b929-8ae464702936\") " pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-qcxjz" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.094350 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h6lc\" (UniqueName: \"kubernetes.io/projected/0e9486f1-e0be-44d7-8789-af45165d2f81-kube-api-access-4h6lc\") pod \"octavia-operator-controller-manager-76fcc6dc7c-27fgr\" (UID: \"0e9486f1-e0be-44d7-8789-af45165d2f81\") " pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-27fgr" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.094666 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz7qt\" (UniqueName: \"kubernetes.io/projected/879eea6a-d132-4b52-a3ce-93a890f5275a-kube-api-access-bz7qt\") pod \"nova-operator-controller-manager-c7c776c96-6przq\" (UID: \"879eea6a-d132-4b52-a3ce-93a890f5275a\") " pod="openstack-operators/nova-operator-controller-manager-c7c776c96-6przq" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.094912 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4qzl\" (UniqueName: \"kubernetes.io/projected/a232fb81-f800-4266-b287-ba2d7be562b8-kube-api-access-d4qzl\") pod \"mariadb-operator-controller-manager-88c7-5cx4l\" (UID: \"a232fb81-f800-4266-b287-ba2d7be562b8\") " pod="openstack-operators/mariadb-operator-controller-manager-88c7-5cx4l" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.081551 4821 reflector.go:368] Caches populated for *v1.Secret 
from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-mgbkz" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.136770 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz7qt\" (UniqueName: \"kubernetes.io/projected/879eea6a-d132-4b52-a3ce-93a890f5275a-kube-api-access-bz7qt\") pod \"nova-operator-controller-manager-c7c776c96-6przq\" (UID: \"879eea6a-d132-4b52-a3ce-93a890f5275a\") " pod="openstack-operators/nova-operator-controller-manager-c7c776c96-6przq" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.138323 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-589c58c6c-jnrfz"] Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.142219 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-bc7dc7bd9-dxc88"] Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.143387 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-dxc88" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.147217 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfsq7\" (UniqueName: \"kubernetes.io/projected/42826092-1d4a-4edd-b929-8ae464702936-kube-api-access-mfsq7\") pod \"neutron-operator-controller-manager-64d7b59854-qcxjz\" (UID: \"42826092-1d4a-4edd-b929-8ae464702936\") " pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-qcxjz" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.150726 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4qzl\" (UniqueName: \"kubernetes.io/projected/a232fb81-f800-4266-b287-ba2d7be562b8-kube-api-access-d4qzl\") pod \"mariadb-operator-controller-manager-88c7-5cx4l\" (UID: \"a232fb81-f800-4266-b287-ba2d7be562b8\") " pod="openstack-operators/mariadb-operator-controller-manager-88c7-5cx4l" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.150744 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-55f4778d9fdtd9v"] Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.154578 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55f4778d9fdtd9v" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.155344 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-w42bb" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.156952 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-9976ff44c-4s74h"] Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.158007 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-4s74h" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.176051 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7bdb6cfb74-zrjwt"] Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.177124 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7bdb6cfb74-zrjwt" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.189320 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-25csk" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.190164 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.190276 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-r79wx" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.190394 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-87rpl" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.195178 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-h5h8d" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.220049 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-f66b554c6-jt7sz"] Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.224071 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-7975b88857-cxn9w" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.234298 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-f66b554c6-jt7sz" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.237437 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f43d5417-95a3-4530-a722-cfb37a0caee7-cert\") pod \"openstack-baremetal-operator-controller-manager-55f4778d9fdtd9v\" (UID: \"f43d5417-95a3-4530-a722-cfb37a0caee7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55f4778d9fdtd9v" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.237570 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct6br\" (UniqueName: \"kubernetes.io/projected/f43d5417-95a3-4530-a722-cfb37a0caee7-kube-api-access-ct6br\") pod \"openstack-baremetal-operator-controller-manager-55f4778d9fdtd9v\" (UID: \"f43d5417-95a3-4530-a722-cfb37a0caee7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55f4778d9fdtd9v" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.237625 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qfnw\" (UniqueName: \"kubernetes.io/projected/978128f9-1130-4524-b15e-97cebe35dbc5-kube-api-access-4qfnw\") pod \"placement-operator-controller-manager-589c58c6c-jnrfz\" (UID: \"978128f9-1130-4524-b15e-97cebe35dbc5\") " pod="openstack-operators/placement-operator-controller-manager-589c58c6c-jnrfz" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.237677 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd2mn\" (UniqueName: \"kubernetes.io/projected/f21f3a23-f85f-44eb-83ea-77d7fe338689-kube-api-access-xd2mn\") pod 
\"swift-operator-controller-manager-bc7dc7bd9-dxc88\" (UID: \"f21f3a23-f85f-44eb-83ea-77d7fe338689\") " pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-dxc88" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.238220 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h6lc\" (UniqueName: \"kubernetes.io/projected/0e9486f1-e0be-44d7-8789-af45165d2f81-kube-api-access-4h6lc\") pod \"octavia-operator-controller-manager-76fcc6dc7c-27fgr\" (UID: \"0e9486f1-e0be-44d7-8789-af45165d2f81\") " pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-27fgr" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.249695 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-gh6qt" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.262309 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-dh2hf" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.297265 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-88c7-5cx4l" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.305002 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h6lc\" (UniqueName: \"kubernetes.io/projected/0e9486f1-e0be-44d7-8789-af45165d2f81-kube-api-access-4h6lc\") pod \"octavia-operator-controller-manager-76fcc6dc7c-27fgr\" (UID: \"0e9486f1-e0be-44d7-8789-af45165d2f81\") " pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-27fgr" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.315915 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-qcxjz" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.333068 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-c7c776c96-6przq" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.383622 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-27fgr" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.393622 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f4bn\" (UniqueName: \"kubernetes.io/projected/878ec077-3dfa-4498-989d-72f34f449923-kube-api-access-9f4bn\") pod \"test-operator-controller-manager-f66b554c6-jt7sz\" (UID: \"878ec077-3dfa-4498-989d-72f34f449923\") " pod="openstack-operators/test-operator-controller-manager-f66b554c6-jt7sz" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.393683 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct6br\" (UniqueName: \"kubernetes.io/projected/f43d5417-95a3-4530-a722-cfb37a0caee7-kube-api-access-ct6br\") pod \"openstack-baremetal-operator-controller-manager-55f4778d9fdtd9v\" (UID: \"f43d5417-95a3-4530-a722-cfb37a0caee7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55f4778d9fdtd9v" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.393740 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qfnw\" (UniqueName: \"kubernetes.io/projected/978128f9-1130-4524-b15e-97cebe35dbc5-kube-api-access-4qfnw\") pod \"placement-operator-controller-manager-589c58c6c-jnrfz\" (UID: \"978128f9-1130-4524-b15e-97cebe35dbc5\") " pod="openstack-operators/placement-operator-controller-manager-589c58c6c-jnrfz" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.393789 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9aa0f9eb-c484-4503-8a83-1cce3d3034c4-cert\") pod \"infra-operator-controller-manager-7d857cc749-xg92t\" (UID: \"9aa0f9eb-c484-4503-8a83-1cce3d3034c4\") " pod="openstack-operators/infra-operator-controller-manager-7d857cc749-xg92t" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.393823 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd2mn\" (UniqueName: \"kubernetes.io/projected/f21f3a23-f85f-44eb-83ea-77d7fe338689-kube-api-access-xd2mn\") pod \"swift-operator-controller-manager-bc7dc7bd9-dxc88\" (UID: \"f21f3a23-f85f-44eb-83ea-77d7fe338689\") " pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-dxc88" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.393908 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc69l\" (UniqueName: \"kubernetes.io/projected/c39c49f5-6b1c-4961-9ee6-175732754086-kube-api-access-fc69l\") pod \"telemetry-operator-controller-manager-7bdb6cfb74-zrjwt\" (UID: \"c39c49f5-6b1c-4961-9ee6-175732754086\") " pod="openstack-operators/telemetry-operator-controller-manager-7bdb6cfb74-zrjwt" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.393932 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f43d5417-95a3-4530-a722-cfb37a0caee7-cert\") pod \"openstack-baremetal-operator-controller-manager-55f4778d9fdtd9v\" (UID: \"f43d5417-95a3-4530-a722-cfb37a0caee7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55f4778d9fdtd9v" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.393949 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkdv7\" (UniqueName: 
\"kubernetes.io/projected/e83989d6-b6f2-40d9-add4-a332f4669966-kube-api-access-gkdv7\") pod \"ovn-operator-controller-manager-9976ff44c-4s74h\" (UID: \"e83989d6-b6f2-40d9-add4-a332f4669966\") " pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-4s74h" Sep 30 17:16:51 crc kubenswrapper[4821]: E0930 17:16:51.394539 4821 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Sep 30 17:16:51 crc kubenswrapper[4821]: E0930 17:16:51.394587 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9aa0f9eb-c484-4503-8a83-1cce3d3034c4-cert podName:9aa0f9eb-c484-4503-8a83-1cce3d3034c4 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:52.394573029 +0000 UTC m=+808.299618963 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9aa0f9eb-c484-4503-8a83-1cce3d3034c4-cert") pod "infra-operator-controller-manager-7d857cc749-xg92t" (UID: "9aa0f9eb-c484-4503-8a83-1cce3d3034c4") : secret "infra-operator-webhook-server-cert" not found Sep 30 17:16:51 crc kubenswrapper[4821]: E0930 17:16:51.395132 4821 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Sep 30 17:16:51 crc kubenswrapper[4821]: E0930 17:16:51.395177 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f43d5417-95a3-4530-a722-cfb37a0caee7-cert podName:f43d5417-95a3-4530-a722-cfb37a0caee7 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:51.895163154 +0000 UTC m=+807.800209098 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f43d5417-95a3-4530-a722-cfb37a0caee7-cert") pod "openstack-baremetal-operator-controller-manager-55f4778d9fdtd9v" (UID: "f43d5417-95a3-4530-a722-cfb37a0caee7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.414385 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-f66b554c6-jt7sz"] Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.426315 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd2mn\" (UniqueName: \"kubernetes.io/projected/f21f3a23-f85f-44eb-83ea-77d7fe338689-kube-api-access-xd2mn\") pod \"swift-operator-controller-manager-bc7dc7bd9-dxc88\" (UID: \"f21f3a23-f85f-44eb-83ea-77d7fe338689\") " pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-dxc88" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.450538 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct6br\" (UniqueName: \"kubernetes.io/projected/f43d5417-95a3-4530-a722-cfb37a0caee7-kube-api-access-ct6br\") pod \"openstack-baremetal-operator-controller-manager-55f4778d9fdtd9v\" (UID: \"f43d5417-95a3-4530-a722-cfb37a0caee7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55f4778d9fdtd9v" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.450599 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-9976ff44c-4s74h"] Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.463925 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qfnw\" (UniqueName: 
\"kubernetes.io/projected/978128f9-1130-4524-b15e-97cebe35dbc5-kube-api-access-4qfnw\") pod \"placement-operator-controller-manager-589c58c6c-jnrfz\" (UID: \"978128f9-1130-4524-b15e-97cebe35dbc5\") " pod="openstack-operators/placement-operator-controller-manager-589c58c6c-jnrfz" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.479219 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-55f4778d9fdtd9v"] Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.486533 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7bdb6cfb74-zrjwt"] Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.494784 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc69l\" (UniqueName: \"kubernetes.io/projected/c39c49f5-6b1c-4961-9ee6-175732754086-kube-api-access-fc69l\") pod \"telemetry-operator-controller-manager-7bdb6cfb74-zrjwt\" (UID: \"c39c49f5-6b1c-4961-9ee6-175732754086\") " pod="openstack-operators/telemetry-operator-controller-manager-7bdb6cfb74-zrjwt" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.494838 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkdv7\" (UniqueName: \"kubernetes.io/projected/e83989d6-b6f2-40d9-add4-a332f4669966-kube-api-access-gkdv7\") pod \"ovn-operator-controller-manager-9976ff44c-4s74h\" (UID: \"e83989d6-b6f2-40d9-add4-a332f4669966\") " pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-4s74h" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.494879 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f4bn\" (UniqueName: \"kubernetes.io/projected/878ec077-3dfa-4498-989d-72f34f449923-kube-api-access-9f4bn\") pod \"test-operator-controller-manager-f66b554c6-jt7sz\" (UID: \"878ec077-3dfa-4498-989d-72f34f449923\") " pod="openstack-operators/test-operator-controller-manager-f66b554c6-jt7sz" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.532157 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-76669f99c-dm2zc"] Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.533475 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-76669f99c-dm2zc" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.535979 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkdv7\" (UniqueName: \"kubernetes.io/projected/e83989d6-b6f2-40d9-add4-a332f4669966-kube-api-access-gkdv7\") pod \"ovn-operator-controller-manager-9976ff44c-4s74h\" (UID: \"e83989d6-b6f2-40d9-add4-a332f4669966\") " pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-4s74h" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.538145 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-d9n5g" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.543129 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc69l\" (UniqueName: \"kubernetes.io/projected/c39c49f5-6b1c-4961-9ee6-175732754086-kube-api-access-fc69l\") pod \"telemetry-operator-controller-manager-7bdb6cfb74-zrjwt\" (UID: \"c39c49f5-6b1c-4961-9ee6-175732754086\") " pod="openstack-operators/telemetry-operator-controller-manager-7bdb6cfb74-zrjwt" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.546011 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f4bn\" (UniqueName: \"kubernetes.io/projected/878ec077-3dfa-4498-989d-72f34f449923-kube-api-access-9f4bn\") pod \"test-operator-controller-manager-f66b554c6-jt7sz\" (UID: \"878ec077-3dfa-4498-989d-72f34f449923\") " pod="openstack-operators/test-operator-controller-manager-f66b554c6-jt7sz" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.563703 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-jnrfz" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.565172 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-bc7dc7bd9-dxc88"] Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.572206 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-76669f99c-dm2zc"] Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.581256 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-dxc88" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.601781 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z28pc\" (UniqueName: \"kubernetes.io/projected/c71ada48-d571-4dc5-aa12-602adaa8bc94-kube-api-access-z28pc\") pod \"watcher-operator-controller-manager-76669f99c-dm2zc\" (UID: \"c71ada48-d571-4dc5-aa12-602adaa8bc94\") " pod="openstack-operators/watcher-operator-controller-manager-76669f99c-dm2zc" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.609356 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5468b64689-l2dt6"] Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.614483 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5468b64689-l2dt6" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.625008 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5468b64689-l2dt6"] Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.627821 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-rqbv5" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.628345 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.640503 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-wb5hw"] Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.641429 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-wb5hw" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.651289 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-q9kpf" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.701278 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-wb5hw"] Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.712238 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zb7p\" (UniqueName: \"kubernetes.io/projected/62c27bc7-995e-467d-8a66-9c26828da252-kube-api-access-7zb7p\") pod \"openstack-operator-controller-manager-5468b64689-l2dt6\" (UID: \"62c27bc7-995e-467d-8a66-9c26828da252\") " pod="openstack-operators/openstack-operator-controller-manager-5468b64689-l2dt6" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.712478 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdfm2\" (UniqueName: \"kubernetes.io/projected/ae386591-10fb-4e44-bd19-2c36cb821e7b-kube-api-access-hdfm2\") pod \"rabbitmq-cluster-operator-manager-79d8469568-wb5hw\" (UID: \"ae386591-10fb-4e44-bd19-2c36cb821e7b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-wb5hw" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.712598 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/62c27bc7-995e-467d-8a66-9c26828da252-cert\") pod \"openstack-operator-controller-manager-5468b64689-l2dt6\" (UID: \"62c27bc7-995e-467d-8a66-9c26828da252\") " pod="openstack-operators/openstack-operator-controller-manager-5468b64689-l2dt6" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.712865 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z28pc\" (UniqueName: \"kubernetes.io/projected/c71ada48-d571-4dc5-aa12-602adaa8bc94-kube-api-access-z28pc\") pod \"watcher-operator-controller-manager-76669f99c-dm2zc\" (UID: \"c71ada48-d571-4dc5-aa12-602adaa8bc94\") " pod="openstack-operators/watcher-operator-controller-manager-76669f99c-dm2zc" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.767800 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-4s74h" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.776632 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z28pc\" (UniqueName: \"kubernetes.io/projected/c71ada48-d571-4dc5-aa12-602adaa8bc94-kube-api-access-z28pc\") pod \"watcher-operator-controller-manager-76669f99c-dm2zc\" (UID: \"c71ada48-d571-4dc5-aa12-602adaa8bc94\") " pod="openstack-operators/watcher-operator-controller-manager-76669f99c-dm2zc" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.792869 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7bdb6cfb74-zrjwt" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.814775 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/62c27bc7-995e-467d-8a66-9c26828da252-cert\") pod \"openstack-operator-controller-manager-5468b64689-l2dt6\" (UID: \"62c27bc7-995e-467d-8a66-9c26828da252\") " pod="openstack-operators/openstack-operator-controller-manager-5468b64689-l2dt6" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.814923 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zb7p\" (UniqueName: \"kubernetes.io/projected/62c27bc7-995e-467d-8a66-9c26828da252-kube-api-access-7zb7p\") pod \"openstack-operator-controller-manager-5468b64689-l2dt6\" (UID: \"62c27bc7-995e-467d-8a66-9c26828da252\") " pod="openstack-operators/openstack-operator-controller-manager-5468b64689-l2dt6" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.814979 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdfm2\" (UniqueName: \"kubernetes.io/projected/ae386591-10fb-4e44-bd19-2c36cb821e7b-kube-api-access-hdfm2\") pod \"rabbitmq-cluster-operator-manager-79d8469568-wb5hw\" (UID: \"ae386591-10fb-4e44-bd19-2c36cb821e7b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-wb5hw" Sep 30 17:16:51 crc kubenswrapper[4821]: E0930 17:16:51.815128 4821 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Sep 30 17:16:51 crc kubenswrapper[4821]: E0930 17:16:51.815191 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62c27bc7-995e-467d-8a66-9c26828da252-cert podName:62c27bc7-995e-467d-8a66-9c26828da252 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:52.315173388 +0000 UTC m=+808.220219332 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/62c27bc7-995e-467d-8a66-9c26828da252-cert") pod "openstack-operator-controller-manager-5468b64689-l2dt6" (UID: "62c27bc7-995e-467d-8a66-9c26828da252") : secret "webhook-server-cert" not found Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.834546 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-f66b554c6-jt7sz" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.836916 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zb7p\" (UniqueName: \"kubernetes.io/projected/62c27bc7-995e-467d-8a66-9c26828da252-kube-api-access-7zb7p\") pod \"openstack-operator-controller-manager-5468b64689-l2dt6\" (UID: \"62c27bc7-995e-467d-8a66-9c26828da252\") " pod="openstack-operators/openstack-operator-controller-manager-5468b64689-l2dt6" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.858031 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-644bddb6d8-p5svk"] Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.862100 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-76669f99c-dm2zc" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.873604 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdfm2\" (UniqueName: \"kubernetes.io/projected/ae386591-10fb-4e44-bd19-2c36cb821e7b-kube-api-access-hdfm2\") pod \"rabbitmq-cluster-operator-manager-79d8469568-wb5hw\" (UID: \"ae386591-10fb-4e44-bd19-2c36cb821e7b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-wb5hw" Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.911604 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6ff8b75857-7msm9"] Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.916725 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f43d5417-95a3-4530-a722-cfb37a0caee7-cert\") pod \"openstack-baremetal-operator-controller-manager-55f4778d9fdtd9v\" (UID: \"f43d5417-95a3-4530-a722-cfb37a0caee7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55f4778d9fdtd9v" Sep 30 17:16:51 crc kubenswrapper[4821]: E0930 17:16:51.917218 4821 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Sep 30 17:16:51 crc kubenswrapper[4821]: E0930 17:16:51.917308 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f43d5417-95a3-4530-a722-cfb37a0caee7-cert podName:f43d5417-95a3-4530-a722-cfb37a0caee7 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:52.917269455 +0000 UTC m=+808.822315399 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f43d5417-95a3-4530-a722-cfb37a0caee7-cert") pod "openstack-baremetal-operator-controller-manager-55f4778d9fdtd9v" (UID: "f43d5417-95a3-4530-a722-cfb37a0caee7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Sep 30 17:16:51 crc kubenswrapper[4821]: I0930 17:16:51.985943 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-wb5hw" Sep 30 17:16:52 crc kubenswrapper[4821]: I0930 17:16:52.273185 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-84f4f7b77b-k8qkc"] Sep 30 17:16:52 crc kubenswrapper[4821]: I0930 17:16:52.325876 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/62c27bc7-995e-467d-8a66-9c26828da252-cert\") pod \"openstack-operator-controller-manager-5468b64689-l2dt6\" (UID: \"62c27bc7-995e-467d-8a66-9c26828da252\") " pod="openstack-operators/openstack-operator-controller-manager-5468b64689-l2dt6" Sep 30 17:16:52 crc kubenswrapper[4821]: E0930 17:16:52.326156 4821 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Sep 30 17:16:52 crc kubenswrapper[4821]: E0930 17:16:52.326500 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62c27bc7-995e-467d-8a66-9c26828da252-cert podName:62c27bc7-995e-467d-8a66-9c26828da252 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:53.32647739 +0000 UTC m=+809.231523334 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/62c27bc7-995e-467d-8a66-9c26828da252-cert") pod "openstack-operator-controller-manager-5468b64689-l2dt6" (UID: "62c27bc7-995e-467d-8a66-9c26828da252") : secret "webhook-server-cert" not found Sep 30 17:16:52 crc kubenswrapper[4821]: I0930 17:16:52.428227 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9aa0f9eb-c484-4503-8a83-1cce3d3034c4-cert\") pod \"infra-operator-controller-manager-7d857cc749-xg92t\" (UID: \"9aa0f9eb-c484-4503-8a83-1cce3d3034c4\") " pod="openstack-operators/infra-operator-controller-manager-7d857cc749-xg92t" Sep 30 17:16:52 crc kubenswrapper[4821]: E0930 17:16:52.428380 4821 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Sep 30 17:16:52 crc kubenswrapper[4821]: E0930 17:16:52.428423 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9aa0f9eb-c484-4503-8a83-1cce3d3034c4-cert podName:9aa0f9eb-c484-4503-8a83-1cce3d3034c4 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:54.428410262 +0000 UTC m=+810.333456206 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9aa0f9eb-c484-4503-8a83-1cce3d3034c4-cert") pod "infra-operator-controller-manager-7d857cc749-xg92t" (UID: "9aa0f9eb-c484-4503-8a83-1cce3d3034c4") : secret "infra-operator-webhook-server-cert" not found Sep 30 17:16:52 crc kubenswrapper[4821]: I0930 17:16:52.702362 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5d889d78cf-l5jlc"] Sep 30 17:16:52 crc kubenswrapper[4821]: W0930 17:16:52.725312 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd46bd2b3_81c8_4425_b7d3_0df63252f647.slice/crio-5509fdc8d0263e13f204c19595d53d9c1b958975fde94a7c12f179be2faa872e WatchSource:0}: Error finding container 5509fdc8d0263e13f204c19595d53d9c1b958975fde94a7c12f179be2faa872e: Status 404 returned error can't find the container with id 5509fdc8d0263e13f204c19595d53d9c1b958975fde94a7c12f179be2faa872e Sep 30 17:16:52 crc kubenswrapper[4821]: W0930 17:16:52.730024 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod483a7050_54fe_4ae7_bc69_55a4dff975f7.slice/crio-a03e69fcbbecdb6192da54bd356a7bc41c38360806158e816f4925b50304b2ee WatchSource:0}: Error finding container a03e69fcbbecdb6192da54bd356a7bc41c38360806158e816f4925b50304b2ee: Status 404 returned error can't find the container with id a03e69fcbbecdb6192da54bd356a7bc41c38360806158e816f4925b50304b2ee Sep 30 17:16:52 crc kubenswrapper[4821]: I0930 17:16:52.742195 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-7975b88857-cxn9w"] Sep 30 17:16:52 crc kubenswrapper[4821]: I0930 17:16:52.829879 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-9f4696d94-qvfwd"] Sep 30 17:16:52 crc kubenswrapper[4821]: I0930 17:16:52.832883 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-qvfwd" event={"ID":"d2e266d9-b27d-4b28-a69c-15245c94e1eb","Type":"ContainerStarted","Data":"04b679173a0890a663ba748c6b46d63a4127fcb4d67c52f86b24ef383c76fd5b"} Sep 30 17:16:52 crc kubenswrapper[4821]: I0930 17:16:52.833687 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-l5jlc" event={"ID":"d46bd2b3-81c8-4425-b7d3-0df63252f647","Type":"ContainerStarted","Data":"5509fdc8d0263e13f204c19595d53d9c1b958975fde94a7c12f179be2faa872e"} Sep 30 17:16:52 crc kubenswrapper[4821]: I0930 17:16:52.860878 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-27fgr"] Sep 30 17:16:52 crc kubenswrapper[4821]: I0930 17:16:52.861292 4821 generic.go:334] "Generic (PLEG): container finished" podID="24b765cb-3f9e-4580-9af0-568941ffc6e4" containerID="a5d6c903208ded3bb708c8759507312c2dd1f89b32758e172021fec77aaa691e" exitCode=0 Sep 30 17:16:52 crc kubenswrapper[4821]: I0930 17:16:52.861438 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cszn6" event={"ID":"24b765cb-3f9e-4580-9af0-568941ffc6e4","Type":"ContainerDied","Data":"a5d6c903208ded3bb708c8759507312c2dd1f89b32758e172021fec77aaa691e"} Sep 30 17:16:52 crc kubenswrapper[4821]: I0930 17:16:52.873510 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/glance-operator-controller-manager-84958c4d49-8zftm"] Sep 30 17:16:52 crc kubenswrapper[4821]: I0930 17:16:52.900563 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-k8qkc" event={"ID":"8941a980-0eba-405b-b73a-0d99cf87d170","Type":"ContainerStarted","Data":"f48dbb51c61905f0c93afae0b8090174452e5ab058daf7298ee0dc826f1eb3c1"} Sep 30 17:16:52 crc kubenswrapper[4821]: I0930 17:16:52.938892 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-7msm9" event={"ID":"f8fa53cb-09d0-4d60-8b2f-8114904df38c","Type":"ContainerStarted","Data":"2b18abe3fbf449af5045c69df7e2c67ae45bb3d3a326b981c696a8aa2302af64"} Sep 30 17:16:52 crc kubenswrapper[4821]: I0930 17:16:52.939880 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f43d5417-95a3-4530-a722-cfb37a0caee7-cert\") pod \"openstack-baremetal-operator-controller-manager-55f4778d9fdtd9v\" (UID: \"f43d5417-95a3-4530-a722-cfb37a0caee7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55f4778d9fdtd9v" Sep 30 17:16:52 crc kubenswrapper[4821]: I0930 17:16:52.956498 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-7975b88857-cxn9w" event={"ID":"483a7050-54fe-4ae7-bc69-55a4dff975f7","Type":"ContainerStarted","Data":"a03e69fcbbecdb6192da54bd356a7bc41c38360806158e816f4925b50304b2ee"} Sep 30 17:16:52 crc kubenswrapper[4821]: I0930 17:16:52.988242 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f43d5417-95a3-4530-a722-cfb37a0caee7-cert\") pod \"openstack-baremetal-operator-controller-manager-55f4778d9fdtd9v\" (UID: \"f43d5417-95a3-4530-a722-cfb37a0caee7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55f4778d9fdtd9v" Sep 30 17:16:52 crc kubenswrapper[4821]: I0930 17:16:52.996047 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-p5svk" event={"ID":"10efd7b7-19ec-41c1-871e-a44c8d0d8181","Type":"ContainerStarted","Data":"694a7e975a59002e60614bf521b398ae478791ba3dbe5b33dcbd4a47041e9707"} Sep 30 17:16:53 crc kubenswrapper[4821]: I0930 17:16:53.167796 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7bdb6cfb74-zrjwt"] Sep 30 17:16:53 crc kubenswrapper[4821]: W0930 17:16:53.188831 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc39c49f5_6b1c_4961_9ee6_175732754086.slice/crio-69cb4a3351b13974e878b8fc2d13cd475c8947565531e3a4050638905603b72c WatchSource:0}: Error finding container 69cb4a3351b13974e878b8fc2d13cd475c8947565531e3a4050638905603b72c: Status 404 returned error can't find the container with id 69cb4a3351b13974e878b8fc2d13cd475c8947565531e3a4050638905603b72c Sep 30 17:16:53 crc kubenswrapper[4821]: I0930 17:16:53.214041 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55f4778d9fdtd9v" Sep 30 17:16:53 crc kubenswrapper[4821]: I0930 17:16:53.232769 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-88c7-5cx4l"] Sep 30 17:16:53 crc kubenswrapper[4821]: W0930 17:16:53.263543 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda232fb81_f800_4266_b287_ba2d7be562b8.slice/crio-e73829e8b165ab8408bb1ba64fd170bb276415853571ebba2ebb9790ca92fc7e WatchSource:0}: Error finding container e73829e8b165ab8408bb1ba64fd170bb276415853571ebba2ebb9790ca92fc7e: Status 404 returned error can't find the container with id e73829e8b165ab8408bb1ba64fd170bb276415853571ebba2ebb9790ca92fc7e Sep 30 17:16:53 crc kubenswrapper[4821]: I0930 17:16:53.285005 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64d7b59854-qcxjz"] Sep 30 17:16:53 crc kubenswrapper[4821]: I0930 17:16:53.312981 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-c7c776c96-6przq"] Sep 30 17:16:53 crc kubenswrapper[4821]: W0930 17:16:53.340333 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42826092_1d4a_4edd_b929_8ae464702936.slice/crio-4a3671365d1cc2afa37a54a7e99166dbf029456f20d025767b07f1a6aeca6217 WatchSource:0}: Error finding container 4a3671365d1cc2afa37a54a7e99166dbf029456f20d025767b07f1a6aeca6217: Status 404 returned error can't find the container with id 4a3671365d1cc2afa37a54a7e99166dbf029456f20d025767b07f1a6aeca6217 Sep 30 17:16:53 crc kubenswrapper[4821]: I0930 17:16:53.349119 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/62c27bc7-995e-467d-8a66-9c26828da252-cert\") pod \"openstack-operator-controller-manager-5468b64689-l2dt6\" (UID: \"62c27bc7-995e-467d-8a66-9c26828da252\") " pod="openstack-operators/openstack-operator-controller-manager-5468b64689-l2dt6" Sep 30 17:16:53 crc kubenswrapper[4821]: I0930 17:16:53.355825 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-5bd55b4bff-h5h8d"] Sep 30 17:16:53 crc kubenswrapper[4821]: I0930 17:16:53.357479 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/62c27bc7-995e-467d-8a66-9c26828da252-cert\") pod \"openstack-operator-controller-manager-5468b64689-l2dt6\" (UID: \"62c27bc7-995e-467d-8a66-9c26828da252\") " pod="openstack-operators/openstack-operator-controller-manager-5468b64689-l2dt6" Sep 30 17:16:53 crc kubenswrapper[4821]: I0930 17:16:53.380489 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6d68dbc695-dh2hf"] Sep 30 17:16:53 crc kubenswrapper[4821]: W0930 17:16:53.413476 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8683557_33d9_4018_94eb_b65323379f05.slice/crio-652d051dc52d513135e3fe6887dcfd56457916a8d2d8bfe80ed6fe65153e4471 WatchSource:0}: Error finding container 652d051dc52d513135e3fe6887dcfd56457916a8d2d8bfe80ed6fe65153e4471: Status 404 returned error can't find the container with id 
652d051dc52d513135e3fe6887dcfd56457916a8d2d8bfe80ed6fe65153e4471 Sep 30 17:16:53 crc kubenswrapper[4821]: I0930 17:16:53.473244 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5468b64689-l2dt6" Sep 30 17:16:53 crc kubenswrapper[4821]: I0930 17:16:53.587602 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-589c58c6c-jnrfz"] Sep 30 17:16:53 crc kubenswrapper[4821]: I0930 17:16:53.604482 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-wb5hw"] Sep 30 17:16:53 crc kubenswrapper[4821]: I0930 17:16:53.612950 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-76669f99c-dm2zc"] Sep 30 17:16:53 crc kubenswrapper[4821]: I0930 17:16:53.635551 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-9976ff44c-4s74h"] Sep 30 17:16:53 crc kubenswrapper[4821]: E0930 17:16:53.647529 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7169dfadf5f5589f14ca52700d2eba991c2a0c7733f6a1ea795752d993d7f61b,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z28pc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-76669f99c-dm2zc_openstack-operators(c71ada48-d571-4dc5-aa12-602adaa8bc94): ErrImagePull: pull QPS exceeded" 
logger="UnhandledError" Sep 30 17:16:53 crc kubenswrapper[4821]: E0930 17:16:53.647689 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:1051afc168038fb814f75e7a5f07c588b295a83ebd143dcd8b46d799e31ad302,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gkdv7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-9976ff44c-4s74h_openstack-operators(e83989d6-b6f2-40d9-add4-a332f4669966): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Sep 30 17:16:53 crc kubenswrapper[4821]: I0930 17:16:53.658846 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-f66b554c6-jt7sz"] Sep 30 17:16:53 crc kubenswrapper[4821]: E0930 17:16:53.681246 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:a303e460aec09217f90043b8ff19c01061af003b614833b33a593df9c00ddf80,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} 
{} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9f4bn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-f66b554c6-jt7sz_openstack-operators(878ec077-3dfa-4498-989d-72f34f449923): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Sep 30 17:16:53 crc kubenswrapper[4821]: I0930 17:16:53.682170 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-bc7dc7bd9-dxc88"] Sep 30 17:16:53 crc kubenswrapper[4821]: I0930 17:16:53.757208 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-55f4778d9fdtd9v"] Sep 30 17:16:53 crc kubenswrapper[4821]: E0930 17:16:53.760838 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3c6f7d737e0196ec302f44354228d783ad3b210a75703dda3b39c15c01a67e8c,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xd2mn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 
8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-bc7dc7bd9-dxc88_openstack-operators(f21f3a23-f85f-44eb-83ea-77d7fe338689): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Sep 30 17:16:54 crc kubenswrapper[4821]: E0930 17:16:54.014650 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-4s74h" podUID="e83989d6-b6f2-40d9-add4-a332f4669966" Sep 30 17:16:54 crc kubenswrapper[4821]: I0930 17:16:54.038434 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-4s74h" event={"ID":"e83989d6-b6f2-40d9-add4-a332f4669966","Type":"ContainerStarted","Data":"91dd757dbdbefc90b2615b21561578696a4f918d477ebf5d2c41566912a6a018"} Sep 30 17:16:54 crc kubenswrapper[4821]: I0930 17:16:54.038481 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-4s74h" event={"ID":"e83989d6-b6f2-40d9-add4-a332f4669966","Type":"ContainerStarted","Data":"106dba5c5b4df1d2f45dd76a0d5023aed56da840a98541ebbf9e73f30b71114c"} Sep 30 17:16:54 crc kubenswrapper[4821]: E0930 17:16:54.043670 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:1051afc168038fb814f75e7a5f07c588b295a83ebd143dcd8b46d799e31ad302\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-4s74h" podUID="e83989d6-b6f2-40d9-add4-a332f4669966" Sep 30 17:16:54 crc kubenswrapper[4821]: I0930 17:16:54.045960 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55f4778d9fdtd9v" event={"ID":"f43d5417-95a3-4530-a722-cfb37a0caee7","Type":"ContainerStarted","Data":"5a5f122ed16b0941e84159328fd2ff41b54f2f22e8ed42426f9bb2566932afa2"} Sep 30 17:16:54 crc kubenswrapper[4821]: E0930 17:16:54.051737 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-76669f99c-dm2zc" podUID="c71ada48-d571-4dc5-aa12-602adaa8bc94" Sep 30 17:16:54 crc kubenswrapper[4821]: I0930 17:16:54.067820 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-76669f99c-dm2zc" event={"ID":"c71ada48-d571-4dc5-aa12-602adaa8bc94","Type":"ContainerStarted","Data":"f1564891b531f690b890bd91cea12c4e149c680e6d27d025be0e9e87cf42f3f7"} Sep 30 17:16:54 crc kubenswrapper[4821]: 
I0930 17:16:54.069662 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-dh2hf" event={"ID":"a8683557-33d9-4018-94eb-b65323379f05","Type":"ContainerStarted","Data":"652d051dc52d513135e3fe6887dcfd56457916a8d2d8bfe80ed6fe65153e4471"} Sep 30 17:16:54 crc kubenswrapper[4821]: I0930 17:16:54.072194 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-h5h8d" event={"ID":"4eda6bf4-b8c3-4c02-aead-2d3bacac7b3b","Type":"ContainerStarted","Data":"2ece4ffa4ae852fd999f1a1bf7671ce23751bd109845308732ec1b8b6237d774"} Sep 30 17:16:54 crc kubenswrapper[4821]: I0930 17:16:54.077710 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4jmt8" Sep 30 17:16:54 crc kubenswrapper[4821]: I0930 17:16:54.078442 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4jmt8" Sep 30 17:16:54 crc kubenswrapper[4821]: E0930 17:16:54.090477 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-f66b554c6-jt7sz" podUID="878ec077-3dfa-4498-989d-72f34f449923" Sep 30 17:16:54 crc kubenswrapper[4821]: I0930 17:16:54.090804 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-dxc88" event={"ID":"f21f3a23-f85f-44eb-83ea-77d7fe338689","Type":"ContainerStarted","Data":"9388b83315cd88ce5d8b4c84e5b978e492d7bd8be8264af8337878e8e5b66ad0"} Sep 30 17:16:54 crc kubenswrapper[4821]: I0930 17:16:54.102354 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-wb5hw" event={"ID":"ae386591-10fb-4e44-bd19-2c36cb821e7b","Type":"ContainerStarted","Data":"ad67a42daf489f7a9866ad82ea00555acfd12e8e17f15b05edd528378bbcabda"} Sep 30 17:16:54 crc kubenswrapper[4821]: I0930 17:16:54.114389 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-88c7-5cx4l" event={"ID":"a232fb81-f800-4266-b287-ba2d7be562b8","Type":"ContainerStarted","Data":"e73829e8b165ab8408bb1ba64fd170bb276415853571ebba2ebb9790ca92fc7e"} Sep 30 17:16:54 crc kubenswrapper[4821]: I0930 17:16:54.129039 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-27fgr" event={"ID":"0e9486f1-e0be-44d7-8789-af45165d2f81","Type":"ContainerStarted","Data":"947aa9d8772076d25610ed5ad2e35d15c4a455a7c5da9623d05894a85aa2c8d2"} Sep 30 17:16:54 crc kubenswrapper[4821]: E0930 17:16:54.139925 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-dxc88" podUID="f21f3a23-f85f-44eb-83ea-77d7fe338689" Sep 30 17:16:54 crc kubenswrapper[4821]: I0930 17:16:54.142697 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4jmt8" Sep 30 17:16:54 crc kubenswrapper[4821]: I0930 17:16:54.160493 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7bdb6cfb74-zrjwt" 
event={"ID":"c39c49f5-6b1c-4961-9ee6-175732754086","Type":"ContainerStarted","Data":"69cb4a3351b13974e878b8fc2d13cd475c8947565531e3a4050638905603b72c"} Sep 30 17:16:54 crc kubenswrapper[4821]: I0930 17:16:54.165238 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-c7c776c96-6przq" event={"ID":"879eea6a-d132-4b52-a3ce-93a890f5275a","Type":"ContainerStarted","Data":"bfe3e80d7f657a3e8d7cc25ab60ca5855ec818b842a02d4ba528f2163c9074a4"} Sep 30 17:16:54 crc kubenswrapper[4821]: I0930 17:16:54.172290 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-jnrfz" event={"ID":"978128f9-1130-4524-b15e-97cebe35dbc5","Type":"ContainerStarted","Data":"9ab7a05fcd80a4e4958da91e1b56bc3a190670b48ade00b06a22b2ed88b77b8e"} Sep 30 17:16:54 crc kubenswrapper[4821]: I0930 17:16:54.175689 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-qcxjz" event={"ID":"42826092-1d4a-4edd-b929-8ae464702936","Type":"ContainerStarted","Data":"4a3671365d1cc2afa37a54a7e99166dbf029456f20d025767b07f1a6aeca6217"} Sep 30 17:16:54 crc kubenswrapper[4821]: I0930 17:16:54.179530 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-8zftm" event={"ID":"0f92490a-9edc-463e-afa8-35d5ff0fc449","Type":"ContainerStarted","Data":"cc9aee458f94113f0037fd88dea01383078334c73d5b663ca0d276fe6a64a77c"} Sep 30 17:16:54 crc kubenswrapper[4821]: I0930 17:16:54.185359 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cszn6" podStartSLOduration=3.449828068 podStartE2EDuration="6.1853426s" podCreationTimestamp="2025-09-30 17:16:48 +0000 UTC" firstStartedPulling="2025-09-30 17:16:50.816359129 +0000 UTC m=+806.721405073" lastFinishedPulling="2025-09-30 17:16:53.551873671 +0000 UTC m=+809.456919605" observedRunningTime="2025-09-30 17:16:54.182366856 +0000 UTC m=+810.087412800" watchObservedRunningTime="2025-09-30 17:16:54.1853426 +0000 UTC m=+810.090388534" Sep 30 17:16:54 crc kubenswrapper[4821]: I0930 17:16:54.190540 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-f66b554c6-jt7sz" event={"ID":"878ec077-3dfa-4498-989d-72f34f449923","Type":"ContainerStarted","Data":"8f81a56eb3f7e5d2700914e19484b1941a4d65d72bcdbf6133c061f1c4537846"} Sep 30 17:16:54 crc kubenswrapper[4821]: E0930 17:16:54.210865 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:a303e460aec09217f90043b8ff19c01061af003b614833b33a593df9c00ddf80\\\"\"" pod="openstack-operators/test-operator-controller-manager-f66b554c6-jt7sz" podUID="878ec077-3dfa-4498-989d-72f34f449923" Sep 30 17:16:54 crc kubenswrapper[4821]: I0930 17:16:54.299967 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5468b64689-l2dt6"] Sep 30 17:16:54 crc kubenswrapper[4821]: I0930 17:16:54.506886 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9aa0f9eb-c484-4503-8a83-1cce3d3034c4-cert\") pod \"infra-operator-controller-manager-7d857cc749-xg92t\" (UID: \"9aa0f9eb-c484-4503-8a83-1cce3d3034c4\") " 
pod="openstack-operators/infra-operator-controller-manager-7d857cc749-xg92t" Sep 30 17:16:54 crc kubenswrapper[4821]: I0930 17:16:54.515954 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9aa0f9eb-c484-4503-8a83-1cce3d3034c4-cert\") pod \"infra-operator-controller-manager-7d857cc749-xg92t\" (UID: \"9aa0f9eb-c484-4503-8a83-1cce3d3034c4\") " pod="openstack-operators/infra-operator-controller-manager-7d857cc749-xg92t" Sep 30 17:16:54 crc kubenswrapper[4821]: I0930 17:16:54.699689 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d857cc749-xg92t" Sep 30 17:16:55 crc kubenswrapper[4821]: I0930 17:16:55.280319 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5468b64689-l2dt6" event={"ID":"62c27bc7-995e-467d-8a66-9c26828da252","Type":"ContainerStarted","Data":"10256c21d275cd7f6dc39d700debc7fc3150de4d0eea33864eba5a4849e3c985"} Sep 30 17:16:55 crc kubenswrapper[4821]: I0930 17:16:55.281048 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5468b64689-l2dt6" event={"ID":"62c27bc7-995e-467d-8a66-9c26828da252","Type":"ContainerStarted","Data":"7890ccd14a5d17b7d01e0f4d1ee2bd6a270a0a99b4c710b31141bccad9089cd9"} Sep 30 17:16:55 crc kubenswrapper[4821]: I0930 17:16:55.281067 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5468b64689-l2dt6" event={"ID":"62c27bc7-995e-467d-8a66-9c26828da252","Type":"ContainerStarted","Data":"c4b556d6a498dee2f11ac15341c54e023d7de1a9815833846515a2f3b5214b82"} Sep 30 17:16:55 crc kubenswrapper[4821]: I0930 17:16:55.281688 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5468b64689-l2dt6" Sep 30 17:16:55 crc kubenswrapper[4821]: I0930 17:16:55.318517 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5468b64689-l2dt6" podStartSLOduration=4.31849929 podStartE2EDuration="4.31849929s" podCreationTimestamp="2025-09-30 17:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:16:55.315426653 +0000 UTC m=+811.220472607" watchObservedRunningTime="2025-09-30 17:16:55.31849929 +0000 UTC m=+811.223545234" Sep 30 17:16:55 crc kubenswrapper[4821]: I0930 17:16:55.336747 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-76669f99c-dm2zc" event={"ID":"c71ada48-d571-4dc5-aa12-602adaa8bc94","Type":"ContainerStarted","Data":"7836cec225f1041f8018c36f84a6025b74d06817097ba319d13093d667fb4f99"} Sep 30 17:16:55 crc kubenswrapper[4821]: E0930 17:16:55.354824 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7169dfadf5f5589f14ca52700d2eba991c2a0c7733f6a1ea795752d993d7f61b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-76669f99c-dm2zc" podUID="c71ada48-d571-4dc5-aa12-602adaa8bc94" Sep 30 17:16:55 crc kubenswrapper[4821]: I0930 17:16:55.372963 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-cszn6" event={"ID":"24b765cb-3f9e-4580-9af0-568941ffc6e4","Type":"ContainerStarted","Data":"1c1b1ea017df90823fff0deba80fecda77f3b95eef5a3b27e95a9978018c3a1a"} Sep 30 17:16:55 crc kubenswrapper[4821]: I0930 17:16:55.384327 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-f66b554c6-jt7sz" event={"ID":"878ec077-3dfa-4498-989d-72f34f449923","Type":"ContainerStarted","Data":"7a8ca5c6a7852e002aba0f9dae5bc315db7674ea8b0b2b5333b3821cb3db3ff2"} Sep 30 17:16:55 crc kubenswrapper[4821]: E0930 17:16:55.393271 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:a303e460aec09217f90043b8ff19c01061af003b614833b33a593df9c00ddf80\\\"\"" pod="openstack-operators/test-operator-controller-manager-f66b554c6-jt7sz" podUID="878ec077-3dfa-4498-989d-72f34f449923" Sep 30 17:16:55 crc kubenswrapper[4821]: I0930 17:16:55.404670 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-dxc88" event={"ID":"f21f3a23-f85f-44eb-83ea-77d7fe338689","Type":"ContainerStarted","Data":"bd34feb411ff594c9d8dae49f990c4ac0cdf21102e3ae2f307272014060143cf"} Sep 30 17:16:55 crc kubenswrapper[4821]: E0930 17:16:55.417274 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:1051afc168038fb814f75e7a5f07c588b295a83ebd143dcd8b46d799e31ad302\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-4s74h" podUID="e83989d6-b6f2-40d9-add4-a332f4669966" Sep 30 17:16:55 crc kubenswrapper[4821]: E0930 17:16:55.431067 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3c6f7d737e0196ec302f44354228d783ad3b210a75703dda3b39c15c01a67e8c\\\"\"" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-dxc88" podUID="f21f3a23-f85f-44eb-83ea-77d7fe338689" Sep 30 17:16:55 crc kubenswrapper[4821]: I0930 17:16:55.513498 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d857cc749-xg92t"] Sep 30 17:16:55 crc kubenswrapper[4821]: I0930 17:16:55.539097 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4jmt8" Sep 30 17:16:56 crc kubenswrapper[4821]: I0930 17:16:56.428251 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d857cc749-xg92t" event={"ID":"9aa0f9eb-c484-4503-8a83-1cce3d3034c4","Type":"ContainerStarted","Data":"17742f567d04adf00b477052a2f86587627554b86ca65a86c2bdde22321f53cb"} Sep 30 17:16:56 crc kubenswrapper[4821]: E0930 17:16:56.432576 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3c6f7d737e0196ec302f44354228d783ad3b210a75703dda3b39c15c01a67e8c\\\"\"" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-dxc88" podUID="f21f3a23-f85f-44eb-83ea-77d7fe338689" Sep 30 17:16:56 crc kubenswrapper[4821]: E0930 
17:16:56.432592 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7169dfadf5f5589f14ca52700d2eba991c2a0c7733f6a1ea795752d993d7f61b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-76669f99c-dm2zc" podUID="c71ada48-d571-4dc5-aa12-602adaa8bc94" Sep 30 17:16:56 crc kubenswrapper[4821]: E0930 17:16:56.432688 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:a303e460aec09217f90043b8ff19c01061af003b614833b33a593df9c00ddf80\\\"\"" pod="openstack-operators/test-operator-controller-manager-f66b554c6-jt7sz" podUID="878ec077-3dfa-4498-989d-72f34f449923" Sep 30 17:16:56 crc kubenswrapper[4821]: I0930 17:16:56.653835 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5sdkz"] Sep 30 17:16:56 crc kubenswrapper[4821]: I0930 17:16:56.655833 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5sdkz" Sep 30 17:16:56 crc kubenswrapper[4821]: I0930 17:16:56.666276 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5sdkz"] Sep 30 17:16:56 crc kubenswrapper[4821]: I0930 17:16:56.753357 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3adf74d0-2217-455a-aa60-e744a5c92ff1-utilities\") pod \"redhat-operators-5sdkz\" (UID: \"3adf74d0-2217-455a-aa60-e744a5c92ff1\") " pod="openshift-marketplace/redhat-operators-5sdkz" Sep 30 17:16:56 crc kubenswrapper[4821]: I0930 17:16:56.753402 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3adf74d0-2217-455a-aa60-e744a5c92ff1-catalog-content\") pod \"redhat-operators-5sdkz\" (UID: \"3adf74d0-2217-455a-aa60-e744a5c92ff1\") " pod="openshift-marketplace/redhat-operators-5sdkz" Sep 30 17:16:56 crc kubenswrapper[4821]: I0930 17:16:56.753463 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z9zm\" (UniqueName: \"kubernetes.io/projected/3adf74d0-2217-455a-aa60-e744a5c92ff1-kube-api-access-9z9zm\") pod \"redhat-operators-5sdkz\" (UID: \"3adf74d0-2217-455a-aa60-e744a5c92ff1\") " pod="openshift-marketplace/redhat-operators-5sdkz" Sep 30 17:16:56 crc kubenswrapper[4821]: I0930 17:16:56.854560 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3adf74d0-2217-455a-aa60-e744a5c92ff1-utilities\") pod \"redhat-operators-5sdkz\" (UID: \"3adf74d0-2217-455a-aa60-e744a5c92ff1\") " pod="openshift-marketplace/redhat-operators-5sdkz" Sep 30 17:16:56 crc kubenswrapper[4821]: I0930 17:16:56.854602 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3adf74d0-2217-455a-aa60-e744a5c92ff1-catalog-content\") pod \"redhat-operators-5sdkz\" (UID: \"3adf74d0-2217-455a-aa60-e744a5c92ff1\") " pod="openshift-marketplace/redhat-operators-5sdkz" Sep 30 17:16:56 crc kubenswrapper[4821]: I0930 17:16:56.854649 4821 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-9z9zm\" (UniqueName: \"kubernetes.io/projected/3adf74d0-2217-455a-aa60-e744a5c92ff1-kube-api-access-9z9zm\") pod \"redhat-operators-5sdkz\" (UID: \"3adf74d0-2217-455a-aa60-e744a5c92ff1\") " pod="openshift-marketplace/redhat-operators-5sdkz" Sep 30 17:16:56 crc kubenswrapper[4821]: I0930 17:16:56.855534 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3adf74d0-2217-455a-aa60-e744a5c92ff1-catalog-content\") pod \"redhat-operators-5sdkz\" (UID: \"3adf74d0-2217-455a-aa60-e744a5c92ff1\") " pod="openshift-marketplace/redhat-operators-5sdkz" Sep 30 17:16:56 crc kubenswrapper[4821]: I0930 17:16:56.856053 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3adf74d0-2217-455a-aa60-e744a5c92ff1-utilities\") pod \"redhat-operators-5sdkz\" (UID: \"3adf74d0-2217-455a-aa60-e744a5c92ff1\") " pod="openshift-marketplace/redhat-operators-5sdkz" Sep 30 17:16:56 crc kubenswrapper[4821]: I0930 17:16:56.873855 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z9zm\" (UniqueName: \"kubernetes.io/projected/3adf74d0-2217-455a-aa60-e744a5c92ff1-kube-api-access-9z9zm\") pod \"redhat-operators-5sdkz\" (UID: \"3adf74d0-2217-455a-aa60-e744a5c92ff1\") " pod="openshift-marketplace/redhat-operators-5sdkz" Sep 30 17:16:56 crc kubenswrapper[4821]: I0930 17:16:56.994455 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5sdkz" Sep 30 17:16:57 crc kubenswrapper[4821]: I0930 17:16:57.630960 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4jmt8"] Sep 30 17:16:57 crc kubenswrapper[4821]: I0930 17:16:57.734335 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5sdkz"] Sep 30 17:16:58 crc kubenswrapper[4821]: I0930 17:16:58.450601 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4jmt8" podUID="80a27e6c-0576-448d-b597-816707084e37" containerName="registry-server" containerID="cri-o://aace720fdd007a793955b7eb213678f35c42774ccc87924a04ab86b06163a43d" gracePeriod=2 Sep 30 17:16:59 crc kubenswrapper[4821]: I0930 17:16:59.174008 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cszn6" Sep 30 17:16:59 crc kubenswrapper[4821]: I0930 17:16:59.174272 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cszn6" Sep 30 17:16:59 crc kubenswrapper[4821]: I0930 17:16:59.226970 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cszn6" Sep 30 17:16:59 crc kubenswrapper[4821]: I0930 17:16:59.457342 4821 generic.go:334] "Generic (PLEG): container finished" podID="80a27e6c-0576-448d-b597-816707084e37" containerID="aace720fdd007a793955b7eb213678f35c42774ccc87924a04ab86b06163a43d" exitCode=0 Sep 30 17:16:59 crc kubenswrapper[4821]: I0930 17:16:59.457513 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4jmt8" event={"ID":"80a27e6c-0576-448d-b597-816707084e37","Type":"ContainerDied","Data":"aace720fdd007a793955b7eb213678f35c42774ccc87924a04ab86b06163a43d"} Sep 30 17:16:59 crc kubenswrapper[4821]: I0930 17:16:59.496384 4821 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cszn6" Sep 30 17:17:02 crc kubenswrapper[4821]: I0930 17:17:02.025808 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cszn6"] Sep 30 17:17:02 crc kubenswrapper[4821]: I0930 17:17:02.026274 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cszn6" podUID="24b765cb-3f9e-4580-9af0-568941ffc6e4" containerName="registry-server" containerID="cri-o://1c1b1ea017df90823fff0deba80fecda77f3b95eef5a3b27e95a9978018c3a1a" gracePeriod=2 Sep 30 17:17:03 crc kubenswrapper[4821]: I0930 17:17:03.479494 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5468b64689-l2dt6" Sep 30 17:17:03 crc kubenswrapper[4821]: I0930 17:17:03.486940 4821 generic.go:334] "Generic (PLEG): container finished" podID="24b765cb-3f9e-4580-9af0-568941ffc6e4" containerID="1c1b1ea017df90823fff0deba80fecda77f3b95eef5a3b27e95a9978018c3a1a" exitCode=0 Sep 30 17:17:03 crc kubenswrapper[4821]: I0930 17:17:03.486982 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cszn6" event={"ID":"24b765cb-3f9e-4580-9af0-568941ffc6e4","Type":"ContainerDied","Data":"1c1b1ea017df90823fff0deba80fecda77f3b95eef5a3b27e95a9978018c3a1a"} Sep 30 17:17:04 crc kubenswrapper[4821]: E0930 17:17:04.078901 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aace720fdd007a793955b7eb213678f35c42774ccc87924a04ab86b06163a43d is running failed: container process not found" containerID="aace720fdd007a793955b7eb213678f35c42774ccc87924a04ab86b06163a43d" cmd=["grpc_health_probe","-addr=:50051"] Sep 30 17:17:04 crc kubenswrapper[4821]: E0930 17:17:04.079413 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aace720fdd007a793955b7eb213678f35c42774ccc87924a04ab86b06163a43d is running failed: container process not found" containerID="aace720fdd007a793955b7eb213678f35c42774ccc87924a04ab86b06163a43d" cmd=["grpc_health_probe","-addr=:50051"] Sep 30 17:17:04 crc kubenswrapper[4821]: E0930 17:17:04.079792 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aace720fdd007a793955b7eb213678f35c42774ccc87924a04ab86b06163a43d is running failed: container process not found" containerID="aace720fdd007a793955b7eb213678f35c42774ccc87924a04ab86b06163a43d" cmd=["grpc_health_probe","-addr=:50051"] Sep 30 17:17:04 crc kubenswrapper[4821]: E0930 17:17:04.079842 4821 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aace720fdd007a793955b7eb213678f35c42774ccc87924a04ab86b06163a43d is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-4jmt8" podUID="80a27e6c-0576-448d-b597-816707084e37" containerName="registry-server" Sep 30 17:17:07 crc kubenswrapper[4821]: W0930 17:17:07.343436 4821 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3adf74d0_2217_455a_aa60_e744a5c92ff1.slice/crio-a15b9233d5b7670015d66e5e789741cf5190255f4d2df5022faac7a73d6dd239 WatchSource:0}: Error finding container a15b9233d5b7670015d66e5e789741cf5190255f4d2df5022faac7a73d6dd239: Status 404 returned error can't find the container with id a15b9233d5b7670015d66e5e789741cf5190255f4d2df5022faac7a73d6dd239 Sep 30 17:17:07 crc kubenswrapper[4821]: I0930 17:17:07.519199 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5sdkz" event={"ID":"3adf74d0-2217-455a-aa60-e744a5c92ff1","Type":"ContainerStarted","Data":"a15b9233d5b7670015d66e5e789741cf5190255f4d2df5022faac7a73d6dd239"} Sep 30 17:17:09 crc kubenswrapper[4821]: E0930 17:17:09.174469 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c1b1ea017df90823fff0deba80fecda77f3b95eef5a3b27e95a9978018c3a1a is running failed: container process not found" containerID="1c1b1ea017df90823fff0deba80fecda77f3b95eef5a3b27e95a9978018c3a1a" cmd=["grpc_health_probe","-addr=:50051"] Sep 30 17:17:09 crc kubenswrapper[4821]: E0930 17:17:09.175375 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c1b1ea017df90823fff0deba80fecda77f3b95eef5a3b27e95a9978018c3a1a is running failed: container process not found" containerID="1c1b1ea017df90823fff0deba80fecda77f3b95eef5a3b27e95a9978018c3a1a" cmd=["grpc_health_probe","-addr=:50051"] Sep 30 17:17:09 crc kubenswrapper[4821]: E0930 17:17:09.176160 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c1b1ea017df90823fff0deba80fecda77f3b95eef5a3b27e95a9978018c3a1a is running failed: container process not found" containerID="1c1b1ea017df90823fff0deba80fecda77f3b95eef5a3b27e95a9978018c3a1a" cmd=["grpc_health_probe","-addr=:50051"] Sep 30 17:17:09 crc kubenswrapper[4821]: E0930 17:17:09.176197 4821 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c1b1ea017df90823fff0deba80fecda77f3b95eef5a3b27e95a9978018c3a1a is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-cszn6" podUID="24b765cb-3f9e-4580-9af0-568941ffc6e4" containerName="registry-server" Sep 30 17:17:10 crc kubenswrapper[4821]: E0930 17:17:10.095452 4821 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:f6b935f67979298c3c263ad84d277e5cf26c0dbba3f85f255c1ec4d1d75241d2" Sep 30 17:17:10 crc kubenswrapper[4821]: E0930 17:17:10.095796 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:f6b935f67979298c3c263ad84d277e5cf26c0dbba3f85f255c1ec4d1d75241d2,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kc99s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-84f4f7b77b-k8qkc_openstack-operators(8941a980-0eba-405b-b73a-0d99cf87d170): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 30 17:17:10 crc kubenswrapper[4821]: E0930 17:17:10.570668 4821 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:917e6dcc519277c46e42898bc9f0f066790fa7b9633fcde668cc8a68a547c13c" Sep 30 17:17:10 crc kubenswrapper[4821]: E0930 17:17:10.570817 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:917e6dcc519277c46e42898bc9f0f066790fa7b9633fcde668cc8a68a547c13c,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7znr2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5d889d78cf-l5jlc_openstack-operators(d46bd2b3-81c8-4425-b7d3-0df63252f647): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 30 17:17:11 crc kubenswrapper[4821]: E0930 17:17:11.961631 4821 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:1e2c65f4331a2bb568d97fbcd02e3bca2627e133a794e1e4fd13368e86ce6bd1" Sep 30 17:17:11 crc kubenswrapper[4821]: E0930 17:17:11.961879 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:1e2c65f4331a2bb568d97fbcd02e3bca2627e133a794e1e4fd13368e86ce6bd1,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gzhwc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-644bddb6d8-p5svk_openstack-operators(10efd7b7-19ec-41c1-871e-a44c8d0d8181): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 30 17:17:13 crc kubenswrapper[4821]: E0930 17:17:13.094803 4821 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:485df5c7813cdf4cf21f48ec48c8e3e4962fee6a1ae4c64f7af127d5ab346a10" Sep 30 17:17:13 crc kubenswrapper[4821]: E0930 17:17:13.095020 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:485df5c7813cdf4cf21f48ec48c8e3e4962fee6a1ae4c64f7af127d5ab346a10,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mfsq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-64d7b59854-qcxjz_openstack-operators(42826092-1d4a-4edd-b929-8ae464702936): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 30 17:17:13 crc kubenswrapper[4821]: E0930 17:17:13.589158 4821 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:4cdb30423c14ab48888aeeb699259bd9051284ec9f874ed9bab94c7965f45884" Sep 30 17:17:13 crc kubenswrapper[4821]: E0930 17:17:13.589340 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:4cdb30423c14ab48888aeeb699259bd9051284ec9f874ed9bab94c7965f45884,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7rm8l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-6d68dbc695-dh2hf_openstack-operators(a8683557-33d9-4018-94eb-b65323379f05): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 30 17:17:14 crc kubenswrapper[4821]: E0930 17:17:14.079466 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aace720fdd007a793955b7eb213678f35c42774ccc87924a04ab86b06163a43d is running failed: container process not found" containerID="aace720fdd007a793955b7eb213678f35c42774ccc87924a04ab86b06163a43d" cmd=["grpc_health_probe","-addr=:50051"] Sep 30 17:17:14 crc kubenswrapper[4821]: E0930 17:17:14.079978 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aace720fdd007a793955b7eb213678f35c42774ccc87924a04ab86b06163a43d is running failed: container process not found" containerID="aace720fdd007a793955b7eb213678f35c42774ccc87924a04ab86b06163a43d" cmd=["grpc_health_probe","-addr=:50051"] Sep 30 17:17:14 crc kubenswrapper[4821]: E0930 17:17:14.080311 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aace720fdd007a793955b7eb213678f35c42774ccc87924a04ab86b06163a43d is running failed: container process not found" containerID="aace720fdd007a793955b7eb213678f35c42774ccc87924a04ab86b06163a43d" cmd=["grpc_health_probe","-addr=:50051"] Sep 30 17:17:14 crc kubenswrapper[4821]: E0930 17:17:14.080341 4821 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aace720fdd007a793955b7eb213678f35c42774ccc87924a04ab86b06163a43d is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-4jmt8" podUID="80a27e6c-0576-448d-b597-816707084e37" containerName="registry-server" Sep 30 17:17:15 crc kubenswrapper[4821]: E0930 17:17:15.152350 4821 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e3f947e9034a951620a76eaf41ceec95eefcef0eacb251b10993d6820d5e1af6" Sep 30 17:17:15 crc kubenswrapper[4821]: E0930 17:17:15.152790 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e3f947e9034a951620a76eaf41ceec95eefcef0eacb251b10993d6820d5e1af6,Command:[/manager],Args:[--health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CL
OUDKITTY_PROCESSOR_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil
,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value
:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_LIGHTSPEED_IMAGE_URL_DEFAULT,Value:quay.io/openstack-lightspeed/rag-content:os-docs-2024.2,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE
_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ct6br,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-55f4778d9fdtd9v_openstack-operators(f43d5417-95a3-4530-a722-cfb37a0caee7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 30 17:17:16 crc kubenswrapper[4821]: E0930 17:17:16.880203 4821 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:a6b3408d79df6b6d4a467e49defaa4a9d9c088c94d0605a4fee0030c9ccc84d2" Sep 30 17:17:16 crc kubenswrapper[4821]: E0930 17:17:16.880790 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a6b3408d79df6b6d4a467e49defaa4a9d9c088c94d0605a4fee0030c9ccc84d2,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4qfnw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-589c58c6c-jnrfz_openstack-operators(978128f9-1130-4524-b15e-97cebe35dbc5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 30 17:17:17 crc kubenswrapper[4821]: E0930 17:17:17.385380 4821 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:de99ad053f95f132f62b38335b2e8bf22fc28acbd441c3814764d63b63ef755f" Sep 30 17:17:17 crc kubenswrapper[4821]: E0930 17:17:17.385559 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:de99ad053f95f132f62b38335b2e8bf22fc28acbd441c3814764d63b63ef755f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k65fl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-7d857cc749-xg92t_openstack-operators(9aa0f9eb-c484-4503-8a83-1cce3d3034c4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 30 17:17:17 crc kubenswrapper[4821]: I0930 17:17:17.449733 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4jmt8" Sep 30 17:17:17 crc kubenswrapper[4821]: I0930 17:17:17.573612 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fktd\" (UniqueName: \"kubernetes.io/projected/80a27e6c-0576-448d-b597-816707084e37-kube-api-access-9fktd\") pod \"80a27e6c-0576-448d-b597-816707084e37\" (UID: \"80a27e6c-0576-448d-b597-816707084e37\") " Sep 30 17:17:17 crc kubenswrapper[4821]: I0930 17:17:17.573648 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80a27e6c-0576-448d-b597-816707084e37-utilities\") pod \"80a27e6c-0576-448d-b597-816707084e37\" (UID: \"80a27e6c-0576-448d-b597-816707084e37\") " Sep 30 17:17:17 crc kubenswrapper[4821]: I0930 17:17:17.573744 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80a27e6c-0576-448d-b597-816707084e37-catalog-content\") pod \"80a27e6c-0576-448d-b597-816707084e37\" (UID: \"80a27e6c-0576-448d-b597-816707084e37\") " Sep 30 17:17:17 crc kubenswrapper[4821]: I0930 17:17:17.576371 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80a27e6c-0576-448d-b597-816707084e37-utilities" (OuterVolumeSpecName: "utilities") pod "80a27e6c-0576-448d-b597-816707084e37" (UID: "80a27e6c-0576-448d-b597-816707084e37"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:17:17 crc kubenswrapper[4821]: I0930 17:17:17.595249 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80a27e6c-0576-448d-b597-816707084e37-kube-api-access-9fktd" (OuterVolumeSpecName: "kube-api-access-9fktd") pod "80a27e6c-0576-448d-b597-816707084e37" (UID: "80a27e6c-0576-448d-b597-816707084e37"). InnerVolumeSpecName "kube-api-access-9fktd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:17:17 crc kubenswrapper[4821]: I0930 17:17:17.605694 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80a27e6c-0576-448d-b597-816707084e37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80a27e6c-0576-448d-b597-816707084e37" (UID: "80a27e6c-0576-448d-b597-816707084e37"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:17:17 crc kubenswrapper[4821]: I0930 17:17:17.627967 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4jmt8" event={"ID":"80a27e6c-0576-448d-b597-816707084e37","Type":"ContainerDied","Data":"0aa9eb8e74998cb01b6168f475ebd3d7baabb52d0e2657a599351f615a80eb8c"} Sep 30 17:17:17 crc kubenswrapper[4821]: I0930 17:17:17.628029 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4jmt8" Sep 30 17:17:17 crc kubenswrapper[4821]: I0930 17:17:17.628142 4821 scope.go:117] "RemoveContainer" containerID="aace720fdd007a793955b7eb213678f35c42774ccc87924a04ab86b06163a43d" Sep 30 17:17:17 crc kubenswrapper[4821]: I0930 17:17:17.662696 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4jmt8"] Sep 30 17:17:17 crc kubenswrapper[4821]: I0930 17:17:17.667935 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4jmt8"] Sep 30 17:17:17 crc kubenswrapper[4821]: I0930 17:17:17.674860 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80a27e6c-0576-448d-b597-816707084e37-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:17:17 crc kubenswrapper[4821]: I0930 17:17:17.674896 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fktd\" (UniqueName: \"kubernetes.io/projected/80a27e6c-0576-448d-b597-816707084e37-kube-api-access-9fktd\") on node \"crc\" DevicePath \"\"" Sep 30 17:17:17 crc kubenswrapper[4821]: I0930 17:17:17.674907 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80a27e6c-0576-448d-b597-816707084e37-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:17:17 crc kubenswrapper[4821]: E0930 17:17:17.977574 4821 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:225524223bf2a7f3a4ce95958fc9ca6fdab02745fb70374e8ff5bf1ddaceda4b" Sep 30 17:17:17 crc kubenswrapper[4821]: E0930 17:17:17.977775 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:225524223bf2a7f3a4ce95958fc9ca6fdab02745fb70374e8ff5bf1ddaceda4b,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hdfm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-79d8469568-wb5hw_openstack-operators(ae386591-10fb-4e44-bd19-2c36cb821e7b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 30 17:17:17 crc kubenswrapper[4821]: E0930 17:17:17.981331 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-wb5hw" podUID="ae386591-10fb-4e44-bd19-2c36cb821e7b" Sep 30 17:17:18 crc kubenswrapper[4821]: E0930 17:17:18.636373 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:225524223bf2a7f3a4ce95958fc9ca6fdab02745fb70374e8ff5bf1ddaceda4b\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-wb5hw" podUID="ae386591-10fb-4e44-bd19-2c36cb821e7b" Sep 30 17:17:18 crc kubenswrapper[4821]: I0930 17:17:18.715039 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80a27e6c-0576-448d-b597-816707084e37" path="/var/lib/kubelet/pods/80a27e6c-0576-448d-b597-816707084e37/volumes" Sep 30 17:17:19 crc kubenswrapper[4821]: E0930 17:17:19.180312 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c1b1ea017df90823fff0deba80fecda77f3b95eef5a3b27e95a9978018c3a1a is running failed: container process not found" 
containerID="1c1b1ea017df90823fff0deba80fecda77f3b95eef5a3b27e95a9978018c3a1a" cmd=["grpc_health_probe","-addr=:50051"] Sep 30 17:17:19 crc kubenswrapper[4821]: E0930 17:17:19.180983 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c1b1ea017df90823fff0deba80fecda77f3b95eef5a3b27e95a9978018c3a1a is running failed: container process not found" containerID="1c1b1ea017df90823fff0deba80fecda77f3b95eef5a3b27e95a9978018c3a1a" cmd=["grpc_health_probe","-addr=:50051"] Sep 30 17:17:19 crc kubenswrapper[4821]: E0930 17:17:19.181357 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c1b1ea017df90823fff0deba80fecda77f3b95eef5a3b27e95a9978018c3a1a is running failed: container process not found" containerID="1c1b1ea017df90823fff0deba80fecda77f3b95eef5a3b27e95a9978018c3a1a" cmd=["grpc_health_probe","-addr=:50051"] Sep 30 17:17:19 crc kubenswrapper[4821]: E0930 17:17:19.181385 4821 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c1b1ea017df90823fff0deba80fecda77f3b95eef5a3b27e95a9978018c3a1a is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-cszn6" podUID="24b765cb-3f9e-4580-9af0-568941ffc6e4" containerName="registry-server" Sep 30 17:17:19 crc kubenswrapper[4821]: E0930 17:17:19.685031 4821 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.51:5001/openstack-k8s-operators/telemetry-operator:ac359d938872c47e1f3d7d8466b12f9d1f8a5236" Sep 30 17:17:19 crc kubenswrapper[4821]: E0930 17:17:19.685117 4821 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.51:5001/openstack-k8s-operators/telemetry-operator:ac359d938872c47e1f3d7d8466b12f9d1f8a5236" Sep 30 17:17:19 crc kubenswrapper[4821]: E0930 17:17:19.685272 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.51:5001/openstack-k8s-operators/telemetry-operator:ac359d938872c47e1f3d7d8466b12f9d1f8a5236,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fc69l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7bdb6cfb74-zrjwt_openstack-operators(c39c49f5-6b1c-4961-9ee6-175732754086): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 30 17:17:20 crc kubenswrapper[4821]: I0930 17:17:20.162367 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cszn6" Sep 30 17:17:20 crc kubenswrapper[4821]: I0930 17:17:20.208901 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvklh\" (UniqueName: \"kubernetes.io/projected/24b765cb-3f9e-4580-9af0-568941ffc6e4-kube-api-access-vvklh\") pod \"24b765cb-3f9e-4580-9af0-568941ffc6e4\" (UID: \"24b765cb-3f9e-4580-9af0-568941ffc6e4\") " Sep 30 17:17:20 crc kubenswrapper[4821]: I0930 17:17:20.208961 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24b765cb-3f9e-4580-9af0-568941ffc6e4-utilities\") pod \"24b765cb-3f9e-4580-9af0-568941ffc6e4\" (UID: \"24b765cb-3f9e-4580-9af0-568941ffc6e4\") " Sep 30 17:17:20 crc kubenswrapper[4821]: I0930 17:17:20.209116 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24b765cb-3f9e-4580-9af0-568941ffc6e4-catalog-content\") pod \"24b765cb-3f9e-4580-9af0-568941ffc6e4\" (UID: \"24b765cb-3f9e-4580-9af0-568941ffc6e4\") " Sep 30 17:17:20 crc kubenswrapper[4821]: I0930 17:17:20.209707 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24b765cb-3f9e-4580-9af0-568941ffc6e4-utilities" (OuterVolumeSpecName: "utilities") pod "24b765cb-3f9e-4580-9af0-568941ffc6e4" (UID: "24b765cb-3f9e-4580-9af0-568941ffc6e4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:17:20 crc kubenswrapper[4821]: I0930 17:17:20.224378 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24b765cb-3f9e-4580-9af0-568941ffc6e4-kube-api-access-vvklh" (OuterVolumeSpecName: "kube-api-access-vvklh") pod "24b765cb-3f9e-4580-9af0-568941ffc6e4" (UID: "24b765cb-3f9e-4580-9af0-568941ffc6e4"). InnerVolumeSpecName "kube-api-access-vvklh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:17:20 crc kubenswrapper[4821]: I0930 17:17:20.225111 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24b765cb-3f9e-4580-9af0-568941ffc6e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24b765cb-3f9e-4580-9af0-568941ffc6e4" (UID: "24b765cb-3f9e-4580-9af0-568941ffc6e4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:17:20 crc kubenswrapper[4821]: I0930 17:17:20.225347 4821 scope.go:117] "RemoveContainer" containerID="611f89f77fecacc47ca169cf1ea937ce77023689829a539e1a22fe93b1c1e33d" Sep 30 17:17:20 crc kubenswrapper[4821]: I0930 17:17:20.311345 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24b765cb-3f9e-4580-9af0-568941ffc6e4-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:17:20 crc kubenswrapper[4821]: I0930 17:17:20.311446 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvklh\" (UniqueName: \"kubernetes.io/projected/24b765cb-3f9e-4580-9af0-568941ffc6e4-kube-api-access-vvklh\") on node \"crc\" DevicePath \"\"" Sep 30 17:17:20 crc kubenswrapper[4821]: I0930 17:17:20.311464 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24b765cb-3f9e-4580-9af0-568941ffc6e4-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:17:20 crc kubenswrapper[4821]: I0930 17:17:20.410141 4821 scope.go:117] "RemoveContainer" containerID="5227607b342377d33b1b0127f0786ce47a44bda555558ca8669aa7baa00558a4" Sep 30 17:17:20 crc kubenswrapper[4821]: E0930 17:17:20.484465 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-p5svk" podUID="10efd7b7-19ec-41c1-871e-a44c8d0d8181" Sep 30 17:17:20 crc kubenswrapper[4821]: E0930 17:17:20.532592 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-l5jlc" podUID="d46bd2b3-81c8-4425-b7d3-0df63252f647" Sep 30 17:17:20 crc kubenswrapper[4821]: E0930 17:17:20.606746 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-jnrfz" podUID="978128f9-1130-4524-b15e-97cebe35dbc5" Sep 30 17:17:20 crc kubenswrapper[4821]: E0930 17:17:20.622524 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-qcxjz" podUID="42826092-1d4a-4edd-b929-8ae464702936" Sep 30 17:17:20 crc kubenswrapper[4821]: E0930 17:17:20.649457 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-7d857cc749-xg92t" 
podUID="9aa0f9eb-c484-4503-8a83-1cce3d3034c4" Sep 30 17:17:20 crc kubenswrapper[4821]: E0930 17:17:20.658763 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55f4778d9fdtd9v" podUID="f43d5417-95a3-4530-a722-cfb37a0caee7" Sep 30 17:17:20 crc kubenswrapper[4821]: I0930 17:17:20.661972 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-qcxjz" event={"ID":"42826092-1d4a-4edd-b929-8ae464702936","Type":"ContainerStarted","Data":"e8c4fb0ae2e0ae6da7f9c826c5f3c7e41a41d52e1e573032ae64735b3496cb85"} Sep 30 17:17:20 crc kubenswrapper[4821]: E0930 17:17:20.663857 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:485df5c7813cdf4cf21f48ec48c8e3e4962fee6a1ae4c64f7af127d5ab346a10\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-qcxjz" podUID="42826092-1d4a-4edd-b929-8ae464702936" Sep 30 17:17:20 crc kubenswrapper[4821]: I0930 17:17:20.664450 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cszn6" event={"ID":"24b765cb-3f9e-4580-9af0-568941ffc6e4","Type":"ContainerDied","Data":"42ff2575b22b2872a880af626cb8c11a4440d92b40f0e9b10d48414e81bf8d6b"} Sep 30 17:17:20 crc kubenswrapper[4821]: I0930 17:17:20.664629 4821 scope.go:117] "RemoveContainer" containerID="1c1b1ea017df90823fff0deba80fecda77f3b95eef5a3b27e95a9978018c3a1a" Sep 30 17:17:20 crc kubenswrapper[4821]: I0930 17:17:20.664596 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cszn6" Sep 30 17:17:20 crc kubenswrapper[4821]: I0930 17:17:20.674801 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d857cc749-xg92t" event={"ID":"9aa0f9eb-c484-4503-8a83-1cce3d3034c4","Type":"ContainerStarted","Data":"7d06e611b13c43773ee26e5f42f841a7305bdd6347271357e91cd153da32fba9"} Sep 30 17:17:20 crc kubenswrapper[4821]: E0930 17:17:20.680364 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:de99ad053f95f132f62b38335b2e8bf22fc28acbd441c3814764d63b63ef755f\\\"\"" pod="openstack-operators/infra-operator-controller-manager-7d857cc749-xg92t" podUID="9aa0f9eb-c484-4503-8a83-1cce3d3034c4" Sep 30 17:17:20 crc kubenswrapper[4821]: I0930 17:17:20.682723 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-p5svk" event={"ID":"10efd7b7-19ec-41c1-871e-a44c8d0d8181","Type":"ContainerStarted","Data":"025e55753760c19c193a54a0defaaabd421fe191c63e3dd78e651836b0ab1895"} Sep 30 17:17:20 crc kubenswrapper[4821]: I0930 17:17:20.699551 4821 scope.go:117] "RemoveContainer" containerID="a5d6c903208ded3bb708c8759507312c2dd1f89b32758e172021fec77aaa691e" Sep 30 17:17:20 crc kubenswrapper[4821]: I0930 17:17:20.700495 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55f4778d9fdtd9v" event={"ID":"f43d5417-95a3-4530-a722-cfb37a0caee7","Type":"ContainerStarted","Data":"8832035a3d4c2a87c46c599d0d213289e10c46e5dac14356f0c7192136798260"} Sep 30 17:17:20 crc kubenswrapper[4821]: E0930 17:17:20.705822 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e3f947e9034a951620a76eaf41ceec95eefcef0eacb251b10993d6820d5e1af6\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55f4778d9fdtd9v" podUID="f43d5417-95a3-4530-a722-cfb37a0caee7" Sep 30 17:17:20 crc kubenswrapper[4821]: E0930 17:17:20.706017 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:1e2c65f4331a2bb568d97fbcd02e3bca2627e133a794e1e4fd13368e86ce6bd1\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-p5svk" podUID="10efd7b7-19ec-41c1-871e-a44c8d0d8181" Sep 30 17:17:20 crc kubenswrapper[4821]: I0930 17:17:20.731530 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-l5jlc" event={"ID":"d46bd2b3-81c8-4425-b7d3-0df63252f647","Type":"ContainerStarted","Data":"3f0ef083a08ba28b3cb318c8b09aeaa52e5f9f874f712d7756c1b6f1b4f17d72"} Sep 30 17:17:20 crc kubenswrapper[4821]: I0930 17:17:20.737468 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-jnrfz" event={"ID":"978128f9-1130-4524-b15e-97cebe35dbc5","Type":"ContainerStarted","Data":"afb3255d7ea6684e6c5cb734c6ae195480d4e0a07616cd18ef2a2c9332e14def"} Sep 30 17:17:20 crc kubenswrapper[4821]: E0930 17:17:20.741211 4821 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a6b3408d79df6b6d4a467e49defaa4a9d9c088c94d0605a4fee0030c9ccc84d2\\\"\"" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-jnrfz" podUID="978128f9-1130-4524-b15e-97cebe35dbc5" Sep 30 17:17:20 crc kubenswrapper[4821]: I0930 17:17:20.742108 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cszn6"] Sep 30 17:17:20 crc kubenswrapper[4821]: E0930 17:17:20.742901 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-dh2hf" podUID="a8683557-33d9-4018-94eb-b65323379f05" Sep 30 17:17:20 crc kubenswrapper[4821]: I0930 17:17:20.750981 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cszn6"] Sep 30 17:17:20 crc kubenswrapper[4821]: I0930 17:17:20.867656 4821 scope.go:117] "RemoveContainer" containerID="3390d0ff0fd4da53ae09eb91c5ce33826a604b9e2e60f0c7732ef2d82e5cfa3d" Sep 30 17:17:21 crc kubenswrapper[4821]: E0930 17:17:21.033941 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-k8qkc" podUID="8941a980-0eba-405b-b73a-0d99cf87d170" Sep 30 17:17:21 crc kubenswrapper[4821]: E0930 17:17:21.140742 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-7bdb6cfb74-zrjwt" podUID="c39c49f5-6b1c-4961-9ee6-175732754086" Sep 30 17:17:21 crc kubenswrapper[4821]: I0930 17:17:21.745365 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-c7c776c96-6przq" event={"ID":"879eea6a-d132-4b52-a3ce-93a890f5275a","Type":"ContainerStarted","Data":"06930bc7c77bfdf387155ad4e39e005b77540e14e83bd1171c997bc9d34eaa79"} Sep 30 17:17:21 crc kubenswrapper[4821]: I0930 17:17:21.746626 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-7msm9" event={"ID":"f8fa53cb-09d0-4d60-8b2f-8114904df38c","Type":"ContainerStarted","Data":"4f8acb704556e8ee72d5027d01018875101a78f8b91ae7ccfc02a08835e26d82"} Sep 30 17:17:21 crc kubenswrapper[4821]: I0930 17:17:21.746650 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-7msm9" event={"ID":"f8fa53cb-09d0-4d60-8b2f-8114904df38c","Type":"ContainerStarted","Data":"ce1de913dcf5dda29a111cc17eee0a98e729c9ce34582dd980481a4d3f1983e5"} Sep 30 17:17:21 crc kubenswrapper[4821]: I0930 17:17:21.747481 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-7msm9" Sep 30 17:17:21 crc kubenswrapper[4821]: I0930 17:17:21.748825 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-7975b88857-cxn9w" 
event={"ID":"483a7050-54fe-4ae7-bc69-55a4dff975f7","Type":"ContainerStarted","Data":"f59e057e9237362ec46447a9c324449808f827b32c72ec9daad0ca2cca200bbb"} Sep 30 17:17:21 crc kubenswrapper[4821]: I0930 17:17:21.750267 4821 generic.go:334] "Generic (PLEG): container finished" podID="3adf74d0-2217-455a-aa60-e744a5c92ff1" containerID="9f913cd5586354fa93511ca0e1ada829e4d7fd9573e589bd3e25dea57711a926" exitCode=0 Sep 30 17:17:21 crc kubenswrapper[4821]: I0930 17:17:21.750306 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5sdkz" event={"ID":"3adf74d0-2217-455a-aa60-e744a5c92ff1","Type":"ContainerDied","Data":"9f913cd5586354fa93511ca0e1ada829e4d7fd9573e589bd3e25dea57711a926"} Sep 30 17:17:21 crc kubenswrapper[4821]: I0930 17:17:21.752653 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-dxc88" event={"ID":"f21f3a23-f85f-44eb-83ea-77d7fe338689","Type":"ContainerStarted","Data":"cd9bbe0cfa3995a490ab12ae6c71358413484d6c3d0844479f4cfb4542021161"} Sep 30 17:17:21 crc kubenswrapper[4821]: I0930 17:17:21.752995 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-dxc88" Sep 30 17:17:21 crc kubenswrapper[4821]: I0930 17:17:21.754247 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-4s74h" event={"ID":"e83989d6-b6f2-40d9-add4-a332f4669966","Type":"ContainerStarted","Data":"b98dbb25e179a07391ed51a02fab1aa5bd78008a7983fbdb96146d6af152016e"} Sep 30 17:17:21 crc kubenswrapper[4821]: I0930 17:17:21.754571 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-4s74h" Sep 30 17:17:21 crc kubenswrapper[4821]: I0930 17:17:21.755496 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-k8qkc" event={"ID":"8941a980-0eba-405b-b73a-0d99cf87d170","Type":"ContainerStarted","Data":"f42b2b47276daa9fb61898295481db87eb9cc27051972f9a7c3eeabcb291a85e"} Sep 30 17:17:21 crc kubenswrapper[4821]: I0930 17:17:21.757606 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-qvfwd" event={"ID":"d2e266d9-b27d-4b28-a69c-15245c94e1eb","Type":"ContainerStarted","Data":"c5a34c0cf5b13286a4a602d55f1145be170762cc852d80f40c920d3ba8187501"} Sep 30 17:17:21 crc kubenswrapper[4821]: I0930 17:17:21.762146 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-h5h8d" event={"ID":"4eda6bf4-b8c3-4c02-aead-2d3bacac7b3b","Type":"ContainerStarted","Data":"82dc4d53b9e5c39ca86cb0841015cb5dbbc61403ace108efc26ffc52233dcfdc"} Sep 30 17:17:21 crc kubenswrapper[4821]: I0930 17:17:21.763514 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-8zftm" event={"ID":"0f92490a-9edc-463e-afa8-35d5ff0fc449","Type":"ContainerStarted","Data":"673484d83f3e30c25f6b6637d5031b45c52a0d83d6794ebc9298b63175406193"} Sep 30 17:17:21 crc kubenswrapper[4821]: I0930 17:17:21.765679 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7bdb6cfb74-zrjwt" 
event={"ID":"c39c49f5-6b1c-4961-9ee6-175732754086","Type":"ContainerStarted","Data":"33675748dbaa7bbe18e146b2a88a120a8e823fa82ac3e5f26a0e9670f5e77037"} Sep 30 17:17:21 crc kubenswrapper[4821]: E0930 17:17:21.766565 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.51:5001/openstack-k8s-operators/telemetry-operator:ac359d938872c47e1f3d7d8466b12f9d1f8a5236\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7bdb6cfb74-zrjwt" podUID="c39c49f5-6b1c-4961-9ee6-175732754086" Sep 30 17:17:21 crc kubenswrapper[4821]: I0930 17:17:21.770651 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-f66b554c6-jt7sz" event={"ID":"878ec077-3dfa-4498-989d-72f34f449923","Type":"ContainerStarted","Data":"805e9e05b77e1312b34a682cac13ea8cdb0ddc3e0eadf6ecff7469039c682789"} Sep 30 17:17:21 crc kubenswrapper[4821]: I0930 17:17:21.771357 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-f66b554c6-jt7sz" Sep 30 17:17:21 crc kubenswrapper[4821]: I0930 17:17:21.773037 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-27fgr" event={"ID":"0e9486f1-e0be-44d7-8789-af45165d2f81","Type":"ContainerStarted","Data":"9cc6a8a43055d6a13c38e3a05c154f87c0e79de391803c991920ff7e9b3b04c8"} Sep 30 17:17:21 crc kubenswrapper[4821]: I0930 17:17:21.774446 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-76669f99c-dm2zc" event={"ID":"c71ada48-d571-4dc5-aa12-602adaa8bc94","Type":"ContainerStarted","Data":"9ef2ddaf055bd9b54d5c9f552c90ea3e1358f0cbcf2a749cb66119e9de271b0e"} Sep 30 17:17:21 crc kubenswrapper[4821]: I0930 17:17:21.774602 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-76669f99c-dm2zc" Sep 30 17:17:21 crc kubenswrapper[4821]: I0930 17:17:21.775673 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-dh2hf" event={"ID":"a8683557-33d9-4018-94eb-b65323379f05","Type":"ContainerStarted","Data":"dbc2a9fa54934d28fd4d06ee3f7dda001482ad905c8391e50582eb26365b528d"} Sep 30 17:17:21 crc kubenswrapper[4821]: E0930 17:17:21.776407 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:4cdb30423c14ab48888aeeb699259bd9051284ec9f874ed9bab94c7965f45884\\\"\"" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-dh2hf" podUID="a8683557-33d9-4018-94eb-b65323379f05" Sep 30 17:17:21 crc kubenswrapper[4821]: I0930 17:17:21.777182 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-88c7-5cx4l" event={"ID":"a232fb81-f800-4266-b287-ba2d7be562b8","Type":"ContainerStarted","Data":"58432306d47a26f68d825fe21b5fd38249563ef409a82444086fe1c261c2ab56"} Sep 30 17:17:21 crc kubenswrapper[4821]: E0930 17:17:21.777916 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:485df5c7813cdf4cf21f48ec48c8e3e4962fee6a1ae4c64f7af127d5ab346a10\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-qcxjz" podUID="42826092-1d4a-4edd-b929-8ae464702936" Sep 30 17:17:21 crc kubenswrapper[4821]: E0930 17:17:21.778254 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:1e2c65f4331a2bb568d97fbcd02e3bca2627e133a794e1e4fd13368e86ce6bd1\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-p5svk" podUID="10efd7b7-19ec-41c1-871e-a44c8d0d8181" Sep 30 17:17:21 crc kubenswrapper[4821]: E0930 17:17:21.778313 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:de99ad053f95f132f62b38335b2e8bf22fc28acbd441c3814764d63b63ef755f\\\"\"" pod="openstack-operators/infra-operator-controller-manager-7d857cc749-xg92t" podUID="9aa0f9eb-c484-4503-8a83-1cce3d3034c4" Sep 30 17:17:21 crc kubenswrapper[4821]: E0930 17:17:21.779922 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a6b3408d79df6b6d4a467e49defaa4a9d9c088c94d0605a4fee0030c9ccc84d2\\\"\"" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-jnrfz" podUID="978128f9-1130-4524-b15e-97cebe35dbc5" Sep 30 17:17:21 crc kubenswrapper[4821]: E0930 17:17:21.779974 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e3f947e9034a951620a76eaf41ceec95eefcef0eacb251b10993d6820d5e1af6\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55f4778d9fdtd9v" podUID="f43d5417-95a3-4530-a722-cfb37a0caee7" Sep 30 17:17:21 crc kubenswrapper[4821]: I0930 17:17:21.938530 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-7msm9" podStartSLOduration=4.4726236329999995 podStartE2EDuration="31.938508545s" podCreationTimestamp="2025-09-30 17:16:50 +0000 UTC" firstStartedPulling="2025-09-30 17:16:52.063776018 +0000 UTC m=+807.968821962" lastFinishedPulling="2025-09-30 17:17:19.52966092 +0000 UTC m=+835.434706874" observedRunningTime="2025-09-30 17:17:21.899506773 +0000 UTC m=+837.804552717" watchObservedRunningTime="2025-09-30 17:17:21.938508545 +0000 UTC m=+837.843554489" Sep 30 17:17:22 crc kubenswrapper[4821]: I0930 17:17:22.269619 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-76669f99c-dm2zc" podStartSLOduration=4.622103752 podStartE2EDuration="31.269603363s" podCreationTimestamp="2025-09-30 17:16:51 +0000 UTC" firstStartedPulling="2025-09-30 17:16:53.64728719 +0000 UTC m=+809.552333134" lastFinishedPulling="2025-09-30 17:17:20.294786801 +0000 UTC m=+836.199832745" observedRunningTime="2025-09-30 17:17:22.140205675 +0000 UTC m=+838.045251619" watchObservedRunningTime="2025-09-30 17:17:22.269603363 +0000 UTC m=+838.174649307" Sep 30 17:17:22 crc kubenswrapper[4821]: I0930 17:17:22.489467 
4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-f66b554c6-jt7sz" podStartSLOduration=4.9121322769999995 podStartE2EDuration="31.489449196s" podCreationTimestamp="2025-09-30 17:16:51 +0000 UTC" firstStartedPulling="2025-09-30 17:16:53.681116165 +0000 UTC m=+809.586162109" lastFinishedPulling="2025-09-30 17:17:20.258433084 +0000 UTC m=+836.163479028" observedRunningTime="2025-09-30 17:17:22.381146545 +0000 UTC m=+838.286192489" watchObservedRunningTime="2025-09-30 17:17:22.489449196 +0000 UTC m=+838.394495140" Sep 30 17:17:22 crc kubenswrapper[4821]: I0930 17:17:22.714711 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24b765cb-3f9e-4580-9af0-568941ffc6e4" path="/var/lib/kubelet/pods/24b765cb-3f9e-4580-9af0-568941ffc6e4/volumes" Sep 30 17:17:22 crc kubenswrapper[4821]: I0930 17:17:22.767662 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-dxc88" podStartSLOduration=6.234097605 podStartE2EDuration="32.767644544s" podCreationTimestamp="2025-09-30 17:16:50 +0000 UTC" firstStartedPulling="2025-09-30 17:16:53.760706549 +0000 UTC m=+809.665752493" lastFinishedPulling="2025-09-30 17:17:20.294253488 +0000 UTC m=+836.199299432" observedRunningTime="2025-09-30 17:17:22.76230034 +0000 UTC m=+838.667346284" watchObservedRunningTime="2025-09-30 17:17:22.767644544 +0000 UTC m=+838.672690488" Sep 30 17:17:22 crc kubenswrapper[4821]: I0930 17:17:22.786514 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-c7c776c96-6przq" event={"ID":"879eea6a-d132-4b52-a3ce-93a890f5275a","Type":"ContainerStarted","Data":"d9d3fb4d7ff3a537e6ab831ba86ea318227f62c57f215e6a43a90a5f78097821"} Sep 30 17:17:22 crc kubenswrapper[4821]: I0930 17:17:22.786623 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-c7c776c96-6przq" Sep 30 17:17:22 crc kubenswrapper[4821]: I0930 17:17:22.797535 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-7975b88857-cxn9w" event={"ID":"483a7050-54fe-4ae7-bc69-55a4dff975f7","Type":"ContainerStarted","Data":"de27b25fa96d44b9f826b238d43ddd019e7aaefd679809f44f56cf1010685399"} Sep 30 17:17:22 crc kubenswrapper[4821]: I0930 17:17:22.809671 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-27fgr" event={"ID":"0e9486f1-e0be-44d7-8789-af45165d2f81","Type":"ContainerStarted","Data":"3e714385ccd3858b9de0f15169601d662d52f3ffc417683900bc3714a72a1cfb"} Sep 30 17:17:22 crc kubenswrapper[4821]: I0930 17:17:22.811948 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-qvfwd" event={"ID":"d2e266d9-b27d-4b28-a69c-15245c94e1eb","Type":"ContainerStarted","Data":"19f9209c7309115e8b035b6432db10769ca5eeb6b38f2f5755e6e84b4c064974"} Sep 30 17:17:22 crc kubenswrapper[4821]: I0930 17:17:22.813807 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-l5jlc" event={"ID":"d46bd2b3-81c8-4425-b7d3-0df63252f647","Type":"ContainerStarted","Data":"ed5fa12cd075624c4742a9dcfda48ebb65e5e889056714e83044dd1cfd80ffce"} Sep 30 17:17:22 crc kubenswrapper[4821]: I0930 17:17:22.815495 4821 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-8zftm" event={"ID":"0f92490a-9edc-463e-afa8-35d5ff0fc449","Type":"ContainerStarted","Data":"bf8df22c96661b9ea3fec9bfd5f26f3900c9d5e0d99cdb0518da8b8f670e9023"} Sep 30 17:17:22 crc kubenswrapper[4821]: I0930 17:17:22.816897 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-h5h8d" event={"ID":"4eda6bf4-b8c3-4c02-aead-2d3bacac7b3b","Type":"ContainerStarted","Data":"3cf5c7fbf32f441d6f4bb547bbf3aa36fd98bf3b4a19306a51eee209f9e36591"} Sep 30 17:17:22 crc kubenswrapper[4821]: E0930 17:17:22.818468 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.51:5001/openstack-k8s-operators/telemetry-operator:ac359d938872c47e1f3d7d8466b12f9d1f8a5236\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7bdb6cfb74-zrjwt" podUID="c39c49f5-6b1c-4961-9ee6-175732754086" Sep 30 17:17:22 crc kubenswrapper[4821]: E0930 17:17:22.822249 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:4cdb30423c14ab48888aeeb699259bd9051284ec9f874ed9bab94c7965f45884\\\"\"" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-dh2hf" podUID="a8683557-33d9-4018-94eb-b65323379f05" Sep 30 17:17:22 crc kubenswrapper[4821]: I0930 17:17:22.837485 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-4s74h" podStartSLOduration=6.178774484 podStartE2EDuration="32.837467514s" podCreationTimestamp="2025-09-30 17:16:50 +0000 UTC" firstStartedPulling="2025-09-30 17:16:53.647605799 +0000 UTC m=+809.552651743" lastFinishedPulling="2025-09-30 17:17:20.306298829 +0000 UTC m=+836.211344773" observedRunningTime="2025-09-30 17:17:22.833848085 +0000 UTC m=+838.738894029" watchObservedRunningTime="2025-09-30 17:17:22.837467514 +0000 UTC m=+838.742513458" Sep 30 17:17:22 crc kubenswrapper[4821]: I0930 17:17:22.859527 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-c7c776c96-6przq" podStartSLOduration=6.6839086519999995 podStartE2EDuration="32.859510674s" podCreationTimestamp="2025-09-30 17:16:50 +0000 UTC" firstStartedPulling="2025-09-30 17:16:53.352715354 +0000 UTC m=+809.257761298" lastFinishedPulling="2025-09-30 17:17:19.528317366 +0000 UTC m=+835.433363320" observedRunningTime="2025-09-30 17:17:22.857432813 +0000 UTC m=+838.762478757" watchObservedRunningTime="2025-09-30 17:17:22.859510674 +0000 UTC m=+838.764556618" Sep 30 17:17:23 crc kubenswrapper[4821]: I0930 17:17:23.826738 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-88c7-5cx4l" event={"ID":"a232fb81-f800-4266-b287-ba2d7be562b8","Type":"ContainerStarted","Data":"65db6e938210259e1efd04b4b02ae067106be3fac4d1ec50294a47ed883daabb"} Sep 30 17:17:23 crc kubenswrapper[4821]: I0930 17:17:23.828351 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-8zftm" Sep 30 17:17:23 crc kubenswrapper[4821]: I0930 17:17:23.828772 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-h5h8d" Sep 30 17:17:23 crc kubenswrapper[4821]: I0930 17:17:23.829247 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-7975b88857-cxn9w" Sep 30 17:17:23 crc kubenswrapper[4821]: I0930 17:17:23.829681 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-27fgr" Sep 30 17:17:23 crc kubenswrapper[4821]: I0930 17:17:23.830178 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-l5jlc" Sep 30 17:17:23 crc kubenswrapper[4821]: I0930 17:17:23.830220 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-qvfwd" Sep 30 17:17:23 crc kubenswrapper[4821]: I0930 17:17:23.845311 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-qvfwd" podStartSLOduration=7.104187953 podStartE2EDuration="33.845292619s" podCreationTimestamp="2025-09-30 17:16:50 +0000 UTC" firstStartedPulling="2025-09-30 17:16:52.788862171 +0000 UTC m=+808.693908115" lastFinishedPulling="2025-09-30 17:17:19.529966837 +0000 UTC m=+835.435012781" observedRunningTime="2025-09-30 17:17:23.843864894 +0000 UTC m=+839.748910848" watchObservedRunningTime="2025-09-30 17:17:23.845292619 +0000 UTC m=+839.750338573" Sep 30 17:17:23 crc kubenswrapper[4821]: I0930 17:17:23.866945 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-h5h8d" podStartSLOduration=7.720508684 podStartE2EDuration="33.866926459s" podCreationTimestamp="2025-09-30 17:16:50 +0000 UTC" firstStartedPulling="2025-09-30 17:16:53.381270596 +0000 UTC m=+809.286316540" lastFinishedPulling="2025-09-30 17:17:19.527688371 +0000 UTC m=+835.432734315" observedRunningTime="2025-09-30 17:17:23.861707729 +0000 UTC m=+839.766753673" watchObservedRunningTime="2025-09-30 17:17:23.866926459 +0000 UTC m=+839.771972393" Sep 30 17:17:23 crc kubenswrapper[4821]: I0930 17:17:23.880933 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-l5jlc" podStartSLOduration=5.387365057 podStartE2EDuration="33.880915428s" podCreationTimestamp="2025-09-30 17:16:50 +0000 UTC" firstStartedPulling="2025-09-30 17:16:52.789721133 +0000 UTC m=+808.694767077" lastFinishedPulling="2025-09-30 17:17:21.283271514 +0000 UTC m=+837.188317448" observedRunningTime="2025-09-30 17:17:23.876869997 +0000 UTC m=+839.781915951" watchObservedRunningTime="2025-09-30 17:17:23.880915428 +0000 UTC m=+839.785961372" Sep 30 17:17:23 crc kubenswrapper[4821]: I0930 17:17:23.902975 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-8zftm" podStartSLOduration=7.245295772 podStartE2EDuration="33.902952727s" podCreationTimestamp="2025-09-30 17:16:50 +0000 UTC" firstStartedPulling="2025-09-30 17:16:52.871645706 +0000 UTC m=+808.776691660" lastFinishedPulling="2025-09-30 17:17:19.529302671 +0000 UTC m=+835.434348615" observedRunningTime="2025-09-30 17:17:23.899166353 +0000 UTC m=+839.804212287" watchObservedRunningTime="2025-09-30 17:17:23.902952727 +0000 UTC m=+839.807998681" Sep 30 
17:17:23 crc kubenswrapper[4821]: I0930 17:17:23.920043 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-27fgr" podStartSLOduration=7.224932566 podStartE2EDuration="33.920021284s" podCreationTimestamp="2025-09-30 17:16:50 +0000 UTC" firstStartedPulling="2025-09-30 17:16:52.834618993 +0000 UTC m=+808.739664937" lastFinishedPulling="2025-09-30 17:17:19.529707711 +0000 UTC m=+835.434753655" observedRunningTime="2025-09-30 17:17:23.917277945 +0000 UTC m=+839.822323899" watchObservedRunningTime="2025-09-30 17:17:23.920021284 +0000 UTC m=+839.825067238" Sep 30 17:17:23 crc kubenswrapper[4821]: I0930 17:17:23.955287 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-7975b88857-cxn9w" podStartSLOduration=7.208863054 podStartE2EDuration="33.955270813s" podCreationTimestamp="2025-09-30 17:16:50 +0000 UTC" firstStartedPulling="2025-09-30 17:16:52.782870062 +0000 UTC m=+808.687916006" lastFinishedPulling="2025-09-30 17:17:19.529277821 +0000 UTC m=+835.434323765" observedRunningTime="2025-09-30 17:17:23.949474148 +0000 UTC m=+839.854520112" watchObservedRunningTime="2025-09-30 17:17:23.955270813 +0000 UTC m=+839.860316757" Sep 30 17:17:24 crc kubenswrapper[4821]: I0930 17:17:24.835059 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5sdkz" event={"ID":"3adf74d0-2217-455a-aa60-e744a5c92ff1","Type":"ContainerStarted","Data":"ab3ddb665261d990d0a11d10c6d2cb78c93bbce1b3c1e7ef4aa26b98998cb6e0"} Sep 30 17:17:24 crc kubenswrapper[4821]: I0930 17:17:24.837671 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-k8qkc" event={"ID":"8941a980-0eba-405b-b73a-0d99cf87d170","Type":"ContainerStarted","Data":"946e19113d9277db12d88208b4a98aaf1cc15e4baec31c8100dbb1b490a8f3b6"} Sep 30 17:17:24 crc kubenswrapper[4821]: I0930 17:17:24.837714 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-k8qkc" Sep 30 17:17:24 crc kubenswrapper[4821]: I0930 17:17:24.838328 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-88c7-5cx4l" Sep 30 17:17:24 crc kubenswrapper[4821]: I0930 17:17:24.879027 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-88c7-5cx4l" podStartSLOduration=8.630424499 podStartE2EDuration="34.879010081s" podCreationTimestamp="2025-09-30 17:16:50 +0000 UTC" firstStartedPulling="2025-09-30 17:16:53.277911709 +0000 UTC m=+809.182957653" lastFinishedPulling="2025-09-30 17:17:19.526497291 +0000 UTC m=+835.431543235" observedRunningTime="2025-09-30 17:17:24.878546578 +0000 UTC m=+840.783592522" watchObservedRunningTime="2025-09-30 17:17:24.879010081 +0000 UTC m=+840.784056025" Sep 30 17:17:24 crc kubenswrapper[4821]: I0930 17:17:24.913328 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-k8qkc" podStartSLOduration=3.2739614599999998 podStartE2EDuration="34.913310146s" podCreationTimestamp="2025-09-30 17:16:50 +0000 UTC" firstStartedPulling="2025-09-30 17:16:52.294600345 +0000 UTC m=+808.199646289" lastFinishedPulling="2025-09-30 17:17:23.933949021 +0000 UTC m=+839.838994975" 
observedRunningTime="2025-09-30 17:17:24.906859104 +0000 UTC m=+840.811905058" watchObservedRunningTime="2025-09-30 17:17:24.913310146 +0000 UTC m=+840.818356100" Sep 30 17:17:26 crc kubenswrapper[4821]: I0930 17:17:26.849329 4821 generic.go:334] "Generic (PLEG): container finished" podID="3adf74d0-2217-455a-aa60-e744a5c92ff1" containerID="ab3ddb665261d990d0a11d10c6d2cb78c93bbce1b3c1e7ef4aa26b98998cb6e0" exitCode=0 Sep 30 17:17:26 crc kubenswrapper[4821]: I0930 17:17:26.849383 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5sdkz" event={"ID":"3adf74d0-2217-455a-aa60-e744a5c92ff1","Type":"ContainerDied","Data":"ab3ddb665261d990d0a11d10c6d2cb78c93bbce1b3c1e7ef4aa26b98998cb6e0"} Sep 30 17:17:27 crc kubenswrapper[4821]: I0930 17:17:27.859185 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5sdkz" event={"ID":"3adf74d0-2217-455a-aa60-e744a5c92ff1","Type":"ContainerStarted","Data":"42f7c7f2a9ecc456ba01a45b09f5280a4c3f37e7e77dae1464c4e1296558e194"} Sep 30 17:17:27 crc kubenswrapper[4821]: I0930 17:17:27.879122 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5sdkz" podStartSLOduration=26.212691744 podStartE2EDuration="31.879103222s" podCreationTimestamp="2025-09-30 17:16:56 +0000 UTC" firstStartedPulling="2025-09-30 17:17:21.75186413 +0000 UTC m=+837.656910074" lastFinishedPulling="2025-09-30 17:17:27.418275608 +0000 UTC m=+843.323321552" observedRunningTime="2025-09-30 17:17:27.874502868 +0000 UTC m=+843.779548812" watchObservedRunningTime="2025-09-30 17:17:27.879103222 +0000 UTC m=+843.784149166" Sep 30 17:17:30 crc kubenswrapper[4821]: I0930 17:17:30.810465 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-7msm9" Sep 30 17:17:30 crc kubenswrapper[4821]: I0930 17:17:30.844220 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-k8qkc" Sep 30 17:17:30 crc kubenswrapper[4821]: I0930 17:17:30.938725 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-l5jlc" Sep 30 17:17:30 crc kubenswrapper[4821]: I0930 17:17:30.939060 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-8zftm" Sep 30 17:17:31 crc kubenswrapper[4821]: I0930 17:17:31.070376 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-qvfwd" Sep 30 17:17:31 crc kubenswrapper[4821]: I0930 17:17:31.197773 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-h5h8d" Sep 30 17:17:31 crc kubenswrapper[4821]: I0930 17:17:31.232182 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-7975b88857-cxn9w" Sep 30 17:17:31 crc kubenswrapper[4821]: I0930 17:17:31.300460 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-88c7-5cx4l" Sep 30 17:17:31 crc kubenswrapper[4821]: I0930 17:17:31.335701 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/nova-operator-controller-manager-c7c776c96-6przq" Sep 30 17:17:31 crc kubenswrapper[4821]: I0930 17:17:31.386309 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-27fgr" Sep 30 17:17:31 crc kubenswrapper[4821]: I0930 17:17:31.584268 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-dxc88" Sep 30 17:17:31 crc kubenswrapper[4821]: I0930 17:17:31.770128 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-4s74h" Sep 30 17:17:31 crc kubenswrapper[4821]: I0930 17:17:31.837663 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-f66b554c6-jt7sz" Sep 30 17:17:31 crc kubenswrapper[4821]: I0930 17:17:31.865328 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-76669f99c-dm2zc" Sep 30 17:17:33 crc kubenswrapper[4821]: I0930 17:17:33.905540 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-wb5hw" event={"ID":"ae386591-10fb-4e44-bd19-2c36cb821e7b","Type":"ContainerStarted","Data":"f150467b908df3dd0272a4392880eba841c0114a0b18f4353ddfe4fbd751bc60"} Sep 30 17:17:34 crc kubenswrapper[4821]: I0930 17:17:34.915294 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-jnrfz" event={"ID":"978128f9-1130-4524-b15e-97cebe35dbc5","Type":"ContainerStarted","Data":"78af1157d83ebe196a91589e28711cb633ab71b0b9b7ffab36e26e76ce3c925a"} Sep 30 17:17:34 crc kubenswrapper[4821]: I0930 17:17:34.916275 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-jnrfz" Sep 30 17:17:34 crc kubenswrapper[4821]: I0930 17:17:34.916919 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-qcxjz" event={"ID":"42826092-1d4a-4edd-b929-8ae464702936","Type":"ContainerStarted","Data":"362d8f2c65f5e8edaea1647b96f8ae19bef500444121fb9d0222b01122f964a5"} Sep 30 17:17:34 crc kubenswrapper[4821]: I0930 17:17:34.917124 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-qcxjz" Sep 30 17:17:34 crc kubenswrapper[4821]: I0930 17:17:34.918246 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-dh2hf" event={"ID":"a8683557-33d9-4018-94eb-b65323379f05","Type":"ContainerStarted","Data":"7a7ff9fb8d53b2e7513b5a497da55cfe7b724d2091f74b636f263557982cdba2"} Sep 30 17:17:34 crc kubenswrapper[4821]: I0930 17:17:34.918397 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-dh2hf" Sep 30 17:17:34 crc kubenswrapper[4821]: I0930 17:17:34.933796 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-jnrfz" podStartSLOduration=4.44888536 podStartE2EDuration="44.933775826s" podCreationTimestamp="2025-09-30 17:16:50 +0000 UTC" firstStartedPulling="2025-09-30 
17:16:53.646643775 +0000 UTC m=+809.551689719" lastFinishedPulling="2025-09-30 17:17:34.131534241 +0000 UTC m=+850.036580185" observedRunningTime="2025-09-30 17:17:34.929419278 +0000 UTC m=+850.834465222" watchObservedRunningTime="2025-09-30 17:17:34.933775826 +0000 UTC m=+850.838821770" Sep 30 17:17:34 crc kubenswrapper[4821]: I0930 17:17:34.934422 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-wb5hw" podStartSLOduration=4.443123466 podStartE2EDuration="43.934415342s" podCreationTimestamp="2025-09-30 17:16:51 +0000 UTC" firstStartedPulling="2025-09-30 17:16:53.646378988 +0000 UTC m=+809.551424932" lastFinishedPulling="2025-09-30 17:17:33.137670854 +0000 UTC m=+849.042716808" observedRunningTime="2025-09-30 17:17:33.926173948 +0000 UTC m=+849.831219892" watchObservedRunningTime="2025-09-30 17:17:34.934415342 +0000 UTC m=+850.839461286" Sep 30 17:17:34 crc kubenswrapper[4821]: I0930 17:17:34.949780 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-dh2hf" podStartSLOduration=4.24851157 podStartE2EDuration="44.949764813s" podCreationTimestamp="2025-09-30 17:16:50 +0000 UTC" firstStartedPulling="2025-09-30 17:16:53.431668953 +0000 UTC m=+809.336714887" lastFinishedPulling="2025-09-30 17:17:34.132922186 +0000 UTC m=+850.037968130" observedRunningTime="2025-09-30 17:17:34.945420185 +0000 UTC m=+850.850466129" watchObservedRunningTime="2025-09-30 17:17:34.949764813 +0000 UTC m=+850.854810757" Sep 30 17:17:34 crc kubenswrapper[4821]: I0930 17:17:34.966440 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-qcxjz" podStartSLOduration=4.050155774 podStartE2EDuration="44.966426297s" podCreationTimestamp="2025-09-30 17:16:50 +0000 UTC" firstStartedPulling="2025-09-30 17:16:53.352388536 +0000 UTC m=+809.257434480" lastFinishedPulling="2025-09-30 17:17:34.268659059 +0000 UTC m=+850.173705003" observedRunningTime="2025-09-30 17:17:34.961589528 +0000 UTC m=+850.866635472" watchObservedRunningTime="2025-09-30 17:17:34.966426297 +0000 UTC m=+850.871472241" Sep 30 17:17:35 crc kubenswrapper[4821]: I0930 17:17:35.926453 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-p5svk" event={"ID":"10efd7b7-19ec-41c1-871e-a44c8d0d8181","Type":"ContainerStarted","Data":"95745a31d9fa2ecdb84073a9c66b3252d781626085143f1c86de50da083571dc"} Sep 30 17:17:35 crc kubenswrapper[4821]: I0930 17:17:35.926938 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-p5svk" Sep 30 17:17:35 crc kubenswrapper[4821]: I0930 17:17:35.929478 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55f4778d9fdtd9v" event={"ID":"f43d5417-95a3-4530-a722-cfb37a0caee7","Type":"ContainerStarted","Data":"9fe2f93fd3656c0c91f05b0d4ffb2bc9007ffebd06ff4c319cba992a028d64a8"} Sep 30 17:17:35 crc kubenswrapper[4821]: I0930 17:17:35.943387 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-p5svk" podStartSLOduration=2.615390376 podStartE2EDuration="45.943373504s" podCreationTimestamp="2025-09-30 17:16:50 +0000 UTC" firstStartedPulling="2025-09-30 
17:16:52.063859211 +0000 UTC m=+807.968905155" lastFinishedPulling="2025-09-30 17:17:35.391842339 +0000 UTC m=+851.296888283" observedRunningTime="2025-09-30 17:17:35.941954039 +0000 UTC m=+851.846999993" watchObservedRunningTime="2025-09-30 17:17:35.943373504 +0000 UTC m=+851.848419448" Sep 30 17:17:35 crc kubenswrapper[4821]: I0930 17:17:35.968535 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55f4778d9fdtd9v" podStartSLOduration=4.5980466700000004 podStartE2EDuration="45.968516219s" podCreationTimestamp="2025-09-30 17:16:50 +0000 UTC" firstStartedPulling="2025-09-30 17:16:53.804121521 +0000 UTC m=+809.709167465" lastFinishedPulling="2025-09-30 17:17:35.17459107 +0000 UTC m=+851.079637014" observedRunningTime="2025-09-30 17:17:35.96535298 +0000 UTC m=+851.870398924" watchObservedRunningTime="2025-09-30 17:17:35.968516219 +0000 UTC m=+851.873562183" Sep 30 17:17:36 crc kubenswrapper[4821]: I0930 17:17:36.936182 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7bdb6cfb74-zrjwt" event={"ID":"c39c49f5-6b1c-4961-9ee6-175732754086","Type":"ContainerStarted","Data":"45ea8e3d8ddb7f1dbce9f26b47bf28b0c23296f0f476f9518a675f0fa57654d3"} Sep 30 17:17:36 crc kubenswrapper[4821]: I0930 17:17:36.953580 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7bdb6cfb74-zrjwt" podStartSLOduration=2.390432557 podStartE2EDuration="45.953564946s" podCreationTimestamp="2025-09-30 17:16:51 +0000 UTC" firstStartedPulling="2025-09-30 17:16:53.202563779 +0000 UTC m=+809.107609723" lastFinishedPulling="2025-09-30 17:17:36.765696168 +0000 UTC m=+852.670742112" observedRunningTime="2025-09-30 17:17:36.948451089 +0000 UTC m=+852.853497033" watchObservedRunningTime="2025-09-30 17:17:36.953564946 +0000 UTC m=+852.858610890" Sep 30 17:17:36 crc kubenswrapper[4821]: I0930 17:17:36.996063 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5sdkz" Sep 30 17:17:36 crc kubenswrapper[4821]: I0930 17:17:36.996125 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5sdkz" Sep 30 17:17:37 crc kubenswrapper[4821]: I0930 17:17:37.044439 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5sdkz" Sep 30 17:17:37 crc kubenswrapper[4821]: I0930 17:17:37.943791 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d857cc749-xg92t" event={"ID":"9aa0f9eb-c484-4503-8a83-1cce3d3034c4","Type":"ContainerStarted","Data":"99550f1642158685bff326558c33e4038565f54a6aff8660f88e3bd419f8bd68"} Sep 30 17:17:37 crc kubenswrapper[4821]: I0930 17:17:37.944181 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-7d857cc749-xg92t" Sep 30 17:17:37 crc kubenswrapper[4821]: I0930 17:17:37.960893 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-7d857cc749-xg92t" podStartSLOduration=6.283775013 podStartE2EDuration="47.960874407s" podCreationTimestamp="2025-09-30 17:16:50 +0000 UTC" firstStartedPulling="2025-09-30 17:16:55.599889458 +0000 UTC m=+811.504935402" lastFinishedPulling="2025-09-30 17:17:37.276988852 
+0000 UTC m=+853.182034796" observedRunningTime="2025-09-30 17:17:37.958336894 +0000 UTC m=+853.863382878" watchObservedRunningTime="2025-09-30 17:17:37.960874407 +0000 UTC m=+853.865920351" Sep 30 17:17:37 crc kubenswrapper[4821]: I0930 17:17:37.984632 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5sdkz" Sep 30 17:17:38 crc kubenswrapper[4821]: I0930 17:17:38.030666 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5sdkz"] Sep 30 17:17:39 crc kubenswrapper[4821]: I0930 17:17:39.954734 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5sdkz" podUID="3adf74d0-2217-455a-aa60-e744a5c92ff1" containerName="registry-server" containerID="cri-o://42f7c7f2a9ecc456ba01a45b09f5280a4c3f37e7e77dae1464c4e1296558e194" gracePeriod=2 Sep 30 17:17:40 crc kubenswrapper[4821]: I0930 17:17:40.328806 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5sdkz" Sep 30 17:17:40 crc kubenswrapper[4821]: I0930 17:17:40.423157 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3adf74d0-2217-455a-aa60-e744a5c92ff1-catalog-content\") pod \"3adf74d0-2217-455a-aa60-e744a5c92ff1\" (UID: \"3adf74d0-2217-455a-aa60-e744a5c92ff1\") " Sep 30 17:17:40 crc kubenswrapper[4821]: I0930 17:17:40.423489 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z9zm\" (UniqueName: \"kubernetes.io/projected/3adf74d0-2217-455a-aa60-e744a5c92ff1-kube-api-access-9z9zm\") pod \"3adf74d0-2217-455a-aa60-e744a5c92ff1\" (UID: \"3adf74d0-2217-455a-aa60-e744a5c92ff1\") " Sep 30 17:17:40 crc kubenswrapper[4821]: I0930 17:17:40.424047 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3adf74d0-2217-455a-aa60-e744a5c92ff1-utilities\") pod \"3adf74d0-2217-455a-aa60-e744a5c92ff1\" (UID: \"3adf74d0-2217-455a-aa60-e744a5c92ff1\") " Sep 30 17:17:40 crc kubenswrapper[4821]: I0930 17:17:40.425037 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3adf74d0-2217-455a-aa60-e744a5c92ff1-utilities" (OuterVolumeSpecName: "utilities") pod "3adf74d0-2217-455a-aa60-e744a5c92ff1" (UID: "3adf74d0-2217-455a-aa60-e744a5c92ff1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:17:40 crc kubenswrapper[4821]: I0930 17:17:40.432376 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3adf74d0-2217-455a-aa60-e744a5c92ff1-kube-api-access-9z9zm" (OuterVolumeSpecName: "kube-api-access-9z9zm") pod "3adf74d0-2217-455a-aa60-e744a5c92ff1" (UID: "3adf74d0-2217-455a-aa60-e744a5c92ff1"). InnerVolumeSpecName "kube-api-access-9z9zm". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:17:40 crc kubenswrapper[4821]: I0930 17:17:40.491199 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3adf74d0-2217-455a-aa60-e744a5c92ff1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3adf74d0-2217-455a-aa60-e744a5c92ff1" (UID: "3adf74d0-2217-455a-aa60-e744a5c92ff1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:17:40 crc kubenswrapper[4821]: I0930 17:17:40.526029 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3adf74d0-2217-455a-aa60-e744a5c92ff1-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:17:40 crc kubenswrapper[4821]: I0930 17:17:40.526062 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9z9zm\" (UniqueName: \"kubernetes.io/projected/3adf74d0-2217-455a-aa60-e744a5c92ff1-kube-api-access-9z9zm\") on node \"crc\" DevicePath \"\"" Sep 30 17:17:40 crc kubenswrapper[4821]: I0930 17:17:40.526072 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3adf74d0-2217-455a-aa60-e744a5c92ff1-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:17:40 crc kubenswrapper[4821]: I0930 17:17:40.826941 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-p5svk" Sep 30 17:17:40 crc kubenswrapper[4821]: I0930 17:17:40.963620 4821 generic.go:334] "Generic (PLEG): container finished" podID="3adf74d0-2217-455a-aa60-e744a5c92ff1" containerID="42f7c7f2a9ecc456ba01a45b09f5280a4c3f37e7e77dae1464c4e1296558e194" exitCode=0 Sep 30 17:17:40 crc kubenswrapper[4821]: I0930 17:17:40.963698 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5sdkz" Sep 30 17:17:40 crc kubenswrapper[4821]: I0930 17:17:40.965256 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5sdkz" event={"ID":"3adf74d0-2217-455a-aa60-e744a5c92ff1","Type":"ContainerDied","Data":"42f7c7f2a9ecc456ba01a45b09f5280a4c3f37e7e77dae1464c4e1296558e194"} Sep 30 17:17:40 crc kubenswrapper[4821]: I0930 17:17:40.965358 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5sdkz" event={"ID":"3adf74d0-2217-455a-aa60-e744a5c92ff1","Type":"ContainerDied","Data":"a15b9233d5b7670015d66e5e789741cf5190255f4d2df5022faac7a73d6dd239"} Sep 30 17:17:40 crc kubenswrapper[4821]: I0930 17:17:40.965476 4821 scope.go:117] "RemoveContainer" containerID="42f7c7f2a9ecc456ba01a45b09f5280a4c3f37e7e77dae1464c4e1296558e194" Sep 30 17:17:40 crc kubenswrapper[4821]: I0930 17:17:40.980414 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5sdkz"] Sep 30 17:17:40 crc kubenswrapper[4821]: I0930 17:17:40.992857 4821 scope.go:117] "RemoveContainer" containerID="ab3ddb665261d990d0a11d10c6d2cb78c93bbce1b3c1e7ef4aa26b98998cb6e0" Sep 30 17:17:40 crc kubenswrapper[4821]: I0930 17:17:40.995424 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5sdkz"] Sep 30 17:17:41 crc kubenswrapper[4821]: I0930 17:17:41.017221 4821 scope.go:117] "RemoveContainer" containerID="9f913cd5586354fa93511ca0e1ada829e4d7fd9573e589bd3e25dea57711a926" Sep 30 17:17:41 crc kubenswrapper[4821]: I0930 17:17:41.041382 4821 scope.go:117] "RemoveContainer" containerID="42f7c7f2a9ecc456ba01a45b09f5280a4c3f37e7e77dae1464c4e1296558e194" Sep 30 17:17:41 crc kubenswrapper[4821]: E0930 17:17:41.042156 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42f7c7f2a9ecc456ba01a45b09f5280a4c3f37e7e77dae1464c4e1296558e194\": container with ID starting with 
42f7c7f2a9ecc456ba01a45b09f5280a4c3f37e7e77dae1464c4e1296558e194 not found: ID does not exist" containerID="42f7c7f2a9ecc456ba01a45b09f5280a4c3f37e7e77dae1464c4e1296558e194" Sep 30 17:17:41 crc kubenswrapper[4821]: I0930 17:17:41.042204 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42f7c7f2a9ecc456ba01a45b09f5280a4c3f37e7e77dae1464c4e1296558e194"} err="failed to get container status \"42f7c7f2a9ecc456ba01a45b09f5280a4c3f37e7e77dae1464c4e1296558e194\": rpc error: code = NotFound desc = could not find container \"42f7c7f2a9ecc456ba01a45b09f5280a4c3f37e7e77dae1464c4e1296558e194\": container with ID starting with 42f7c7f2a9ecc456ba01a45b09f5280a4c3f37e7e77dae1464c4e1296558e194 not found: ID does not exist" Sep 30 17:17:41 crc kubenswrapper[4821]: I0930 17:17:41.042234 4821 scope.go:117] "RemoveContainer" containerID="ab3ddb665261d990d0a11d10c6d2cb78c93bbce1b3c1e7ef4aa26b98998cb6e0" Sep 30 17:17:41 crc kubenswrapper[4821]: E0930 17:17:41.042651 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab3ddb665261d990d0a11d10c6d2cb78c93bbce1b3c1e7ef4aa26b98998cb6e0\": container with ID starting with ab3ddb665261d990d0a11d10c6d2cb78c93bbce1b3c1e7ef4aa26b98998cb6e0 not found: ID does not exist" containerID="ab3ddb665261d990d0a11d10c6d2cb78c93bbce1b3c1e7ef4aa26b98998cb6e0" Sep 30 17:17:41 crc kubenswrapper[4821]: I0930 17:17:41.042745 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab3ddb665261d990d0a11d10c6d2cb78c93bbce1b3c1e7ef4aa26b98998cb6e0"} err="failed to get container status \"ab3ddb665261d990d0a11d10c6d2cb78c93bbce1b3c1e7ef4aa26b98998cb6e0\": rpc error: code = NotFound desc = could not find container \"ab3ddb665261d990d0a11d10c6d2cb78c93bbce1b3c1e7ef4aa26b98998cb6e0\": container with ID starting with ab3ddb665261d990d0a11d10c6d2cb78c93bbce1b3c1e7ef4aa26b98998cb6e0 not found: ID does not exist" Sep 30 17:17:41 crc kubenswrapper[4821]: I0930 17:17:41.042816 4821 scope.go:117] "RemoveContainer" containerID="9f913cd5586354fa93511ca0e1ada829e4d7fd9573e589bd3e25dea57711a926" Sep 30 17:17:41 crc kubenswrapper[4821]: E0930 17:17:41.043135 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f913cd5586354fa93511ca0e1ada829e4d7fd9573e589bd3e25dea57711a926\": container with ID starting with 9f913cd5586354fa93511ca0e1ada829e4d7fd9573e589bd3e25dea57711a926 not found: ID does not exist" containerID="9f913cd5586354fa93511ca0e1ada829e4d7fd9573e589bd3e25dea57711a926" Sep 30 17:17:41 crc kubenswrapper[4821]: I0930 17:17:41.043290 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f913cd5586354fa93511ca0e1ada829e4d7fd9573e589bd3e25dea57711a926"} err="failed to get container status \"9f913cd5586354fa93511ca0e1ada829e4d7fd9573e589bd3e25dea57711a926\": rpc error: code = NotFound desc = could not find container \"9f913cd5586354fa93511ca0e1ada829e4d7fd9573e589bd3e25dea57711a926\": container with ID starting with 9f913cd5586354fa93511ca0e1ada829e4d7fd9573e589bd3e25dea57711a926 not found: ID does not exist" Sep 30 17:17:41 crc kubenswrapper[4821]: I0930 17:17:41.268431 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-dh2hf" Sep 30 17:17:41 crc kubenswrapper[4821]: I0930 17:17:41.322180 4821 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-qcxjz" Sep 30 17:17:41 crc kubenswrapper[4821]: I0930 17:17:41.567953 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-jnrfz" Sep 30 17:17:41 crc kubenswrapper[4821]: I0930 17:17:41.794484 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7bdb6cfb74-zrjwt" Sep 30 17:17:41 crc kubenswrapper[4821]: I0930 17:17:41.796658 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7bdb6cfb74-zrjwt" Sep 30 17:17:42 crc kubenswrapper[4821]: I0930 17:17:42.715478 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3adf74d0-2217-455a-aa60-e744a5c92ff1" path="/var/lib/kubelet/pods/3adf74d0-2217-455a-aa60-e744a5c92ff1/volumes" Sep 30 17:17:43 crc kubenswrapper[4821]: I0930 17:17:43.215587 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55f4778d9fdtd9v" Sep 30 17:17:43 crc kubenswrapper[4821]: I0930 17:17:43.221260 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55f4778d9fdtd9v" Sep 30 17:17:44 crc kubenswrapper[4821]: I0930 17:17:44.706151 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-7d857cc749-xg92t" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.192303 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xrzbj"] Sep 30 17:18:04 crc kubenswrapper[4821]: E0930 17:18:04.192925 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80a27e6c-0576-448d-b597-816707084e37" containerName="extract-content" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.192938 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="80a27e6c-0576-448d-b597-816707084e37" containerName="extract-content" Sep 30 17:18:04 crc kubenswrapper[4821]: E0930 17:18:04.192961 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b765cb-3f9e-4580-9af0-568941ffc6e4" containerName="registry-server" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.192967 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b765cb-3f9e-4580-9af0-568941ffc6e4" containerName="registry-server" Sep 30 17:18:04 crc kubenswrapper[4821]: E0930 17:18:04.192980 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80a27e6c-0576-448d-b597-816707084e37" containerName="registry-server" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.192988 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="80a27e6c-0576-448d-b597-816707084e37" containerName="registry-server" Sep 30 17:18:04 crc kubenswrapper[4821]: E0930 17:18:04.192997 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b765cb-3f9e-4580-9af0-568941ffc6e4" containerName="extract-content" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.193004 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b765cb-3f9e-4580-9af0-568941ffc6e4" containerName="extract-content" Sep 30 17:18:04 crc kubenswrapper[4821]: E0930 17:18:04.193028 4821 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="3adf74d0-2217-455a-aa60-e744a5c92ff1" containerName="registry-server" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.193033 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="3adf74d0-2217-455a-aa60-e744a5c92ff1" containerName="registry-server" Sep 30 17:18:04 crc kubenswrapper[4821]: E0930 17:18:04.193053 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80a27e6c-0576-448d-b597-816707084e37" containerName="extract-utilities" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.193058 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="80a27e6c-0576-448d-b597-816707084e37" containerName="extract-utilities" Sep 30 17:18:04 crc kubenswrapper[4821]: E0930 17:18:04.193071 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b765cb-3f9e-4580-9af0-568941ffc6e4" containerName="extract-utilities" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.193100 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b765cb-3f9e-4580-9af0-568941ffc6e4" containerName="extract-utilities" Sep 30 17:18:04 crc kubenswrapper[4821]: E0930 17:18:04.193107 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3adf74d0-2217-455a-aa60-e744a5c92ff1" containerName="extract-content" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.193113 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="3adf74d0-2217-455a-aa60-e744a5c92ff1" containerName="extract-content" Sep 30 17:18:04 crc kubenswrapper[4821]: E0930 17:18:04.193120 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3adf74d0-2217-455a-aa60-e744a5c92ff1" containerName="extract-utilities" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.193132 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="3adf74d0-2217-455a-aa60-e744a5c92ff1" containerName="extract-utilities" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.193263 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="3adf74d0-2217-455a-aa60-e744a5c92ff1" containerName="registry-server" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.193279 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b765cb-3f9e-4580-9af0-568941ffc6e4" containerName="registry-server" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.193291 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="80a27e6c-0576-448d-b597-816707084e37" containerName="registry-server" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.193993 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xrzbj" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.197329 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.197488 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-74npt" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.197509 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.197663 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.241116 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35fed46b-6384-49a9-9c1e-b6bae06ad8d5-config\") pod \"dnsmasq-dns-675f4bcbfc-xrzbj\" (UID: \"35fed46b-6384-49a9-9c1e-b6bae06ad8d5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xrzbj" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.241406 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtqxd\" (UniqueName: \"kubernetes.io/projected/35fed46b-6384-49a9-9c1e-b6bae06ad8d5-kube-api-access-jtqxd\") pod \"dnsmasq-dns-675f4bcbfc-xrzbj\" (UID: \"35fed46b-6384-49a9-9c1e-b6bae06ad8d5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xrzbj" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.254539 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xrzbj"] Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.277627 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zprcc"] Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.278952 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-zprcc" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.280688 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.298375 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zprcc"] Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.342426 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69qlf\" (UniqueName: \"kubernetes.io/projected/9abcd4e2-a109-467e-8b5f-ea49994d758f-kube-api-access-69qlf\") pod \"dnsmasq-dns-78dd6ddcc-zprcc\" (UID: \"9abcd4e2-a109-467e-8b5f-ea49994d758f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zprcc" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.342484 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9abcd4e2-a109-467e-8b5f-ea49994d758f-config\") pod \"dnsmasq-dns-78dd6ddcc-zprcc\" (UID: \"9abcd4e2-a109-467e-8b5f-ea49994d758f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zprcc" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.342551 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtqxd\" (UniqueName: \"kubernetes.io/projected/35fed46b-6384-49a9-9c1e-b6bae06ad8d5-kube-api-access-jtqxd\") pod \"dnsmasq-dns-675f4bcbfc-xrzbj\" (UID: \"35fed46b-6384-49a9-9c1e-b6bae06ad8d5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xrzbj" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.342601 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35fed46b-6384-49a9-9c1e-b6bae06ad8d5-config\") pod \"dnsmasq-dns-675f4bcbfc-xrzbj\" (UID: \"35fed46b-6384-49a9-9c1e-b6bae06ad8d5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xrzbj" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.342642 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9abcd4e2-a109-467e-8b5f-ea49994d758f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-zprcc\" (UID: \"9abcd4e2-a109-467e-8b5f-ea49994d758f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zprcc" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.343416 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35fed46b-6384-49a9-9c1e-b6bae06ad8d5-config\") pod \"dnsmasq-dns-675f4bcbfc-xrzbj\" (UID: \"35fed46b-6384-49a9-9c1e-b6bae06ad8d5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xrzbj" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.386582 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtqxd\" (UniqueName: \"kubernetes.io/projected/35fed46b-6384-49a9-9c1e-b6bae06ad8d5-kube-api-access-jtqxd\") pod \"dnsmasq-dns-675f4bcbfc-xrzbj\" (UID: \"35fed46b-6384-49a9-9c1e-b6bae06ad8d5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xrzbj" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.443928 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9abcd4e2-a109-467e-8b5f-ea49994d758f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-zprcc\" (UID: \"9abcd4e2-a109-467e-8b5f-ea49994d758f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zprcc" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 
17:18:04.443994 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69qlf\" (UniqueName: \"kubernetes.io/projected/9abcd4e2-a109-467e-8b5f-ea49994d758f-kube-api-access-69qlf\") pod \"dnsmasq-dns-78dd6ddcc-zprcc\" (UID: \"9abcd4e2-a109-467e-8b5f-ea49994d758f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zprcc" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.444031 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9abcd4e2-a109-467e-8b5f-ea49994d758f-config\") pod \"dnsmasq-dns-78dd6ddcc-zprcc\" (UID: \"9abcd4e2-a109-467e-8b5f-ea49994d758f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zprcc" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.444946 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9abcd4e2-a109-467e-8b5f-ea49994d758f-config\") pod \"dnsmasq-dns-78dd6ddcc-zprcc\" (UID: \"9abcd4e2-a109-467e-8b5f-ea49994d758f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zprcc" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.445025 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9abcd4e2-a109-467e-8b5f-ea49994d758f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-zprcc\" (UID: \"9abcd4e2-a109-467e-8b5f-ea49994d758f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zprcc" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.466887 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69qlf\" (UniqueName: \"kubernetes.io/projected/9abcd4e2-a109-467e-8b5f-ea49994d758f-kube-api-access-69qlf\") pod \"dnsmasq-dns-78dd6ddcc-zprcc\" (UID: \"9abcd4e2-a109-467e-8b5f-ea49994d758f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zprcc" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.512102 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xrzbj" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.594821 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-zprcc" Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.840933 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xrzbj"] Sep 30 17:18:04 crc kubenswrapper[4821]: W0930 17:18:04.847232 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35fed46b_6384_49a9_9c1e_b6bae06ad8d5.slice/crio-da299024d4a6d79d0f3c4d8cdb5c0d57c1bcff2a9231d5be9ddafa69297f184d WatchSource:0}: Error finding container da299024d4a6d79d0f3c4d8cdb5c0d57c1bcff2a9231d5be9ddafa69297f184d: Status 404 returned error can't find the container with id da299024d4a6d79d0f3c4d8cdb5c0d57c1bcff2a9231d5be9ddafa69297f184d Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.857457 4821 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 30 17:18:04 crc kubenswrapper[4821]: I0930 17:18:04.889804 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zprcc"] Sep 30 17:18:04 crc kubenswrapper[4821]: W0930 17:18:04.898898 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9abcd4e2_a109_467e_8b5f_ea49994d758f.slice/crio-c5c22df526d51e632f3e83a88609f28f6233aa57181748f1f081a5e0636f9521 WatchSource:0}: Error finding container c5c22df526d51e632f3e83a88609f28f6233aa57181748f1f081a5e0636f9521: Status 404 returned error can't find the container with id c5c22df526d51e632f3e83a88609f28f6233aa57181748f1f081a5e0636f9521 Sep 30 17:18:05 crc kubenswrapper[4821]: I0930 17:18:05.116024 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-xrzbj" event={"ID":"35fed46b-6384-49a9-9c1e-b6bae06ad8d5","Type":"ContainerStarted","Data":"da299024d4a6d79d0f3c4d8cdb5c0d57c1bcff2a9231d5be9ddafa69297f184d"} Sep 30 17:18:05 crc kubenswrapper[4821]: I0930 17:18:05.118189 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-zprcc" event={"ID":"9abcd4e2-a109-467e-8b5f-ea49994d758f","Type":"ContainerStarted","Data":"c5c22df526d51e632f3e83a88609f28f6233aa57181748f1f081a5e0636f9521"} Sep 30 17:18:07 crc kubenswrapper[4821]: I0930 17:18:07.401341 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xrzbj"] Sep 30 17:18:07 crc kubenswrapper[4821]: I0930 17:18:07.439416 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rbz4b"] Sep 30 17:18:07 crc kubenswrapper[4821]: I0930 17:18:07.440579 4821 util.go:30] "No sandbox for pod can be found. 
Sep 30 17:18:07 crc kubenswrapper[4821]: I0930 17:18:07.465146 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rbz4b"]
Sep 30 17:18:07 crc kubenswrapper[4821]: I0930 17:18:07.504744 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938-config\") pod \"dnsmasq-dns-666b6646f7-rbz4b\" (UID: \"5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938\") " pod="openstack/dnsmasq-dns-666b6646f7-rbz4b"
Sep 30 17:18:07 crc kubenswrapper[4821]: I0930 17:18:07.504795 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hnt6\" (UniqueName: \"kubernetes.io/projected/5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938-kube-api-access-6hnt6\") pod \"dnsmasq-dns-666b6646f7-rbz4b\" (UID: \"5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938\") " pod="openstack/dnsmasq-dns-666b6646f7-rbz4b"
Sep 30 17:18:07 crc kubenswrapper[4821]: I0930 17:18:07.504969 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938-dns-svc\") pod \"dnsmasq-dns-666b6646f7-rbz4b\" (UID: \"5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938\") " pod="openstack/dnsmasq-dns-666b6646f7-rbz4b"
Sep 30 17:18:07 crc kubenswrapper[4821]: I0930 17:18:07.637104 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938-dns-svc\") pod \"dnsmasq-dns-666b6646f7-rbz4b\" (UID: \"5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938\") " pod="openstack/dnsmasq-dns-666b6646f7-rbz4b"
Sep 30 17:18:07 crc kubenswrapper[4821]: I0930 17:18:07.637169 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938-config\") pod \"dnsmasq-dns-666b6646f7-rbz4b\" (UID: \"5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938\") " pod="openstack/dnsmasq-dns-666b6646f7-rbz4b"
Sep 30 17:18:07 crc kubenswrapper[4821]: I0930 17:18:07.637186 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hnt6\" (UniqueName: \"kubernetes.io/projected/5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938-kube-api-access-6hnt6\") pod \"dnsmasq-dns-666b6646f7-rbz4b\" (UID: \"5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938\") " pod="openstack/dnsmasq-dns-666b6646f7-rbz4b"
Sep 30 17:18:07 crc kubenswrapper[4821]: I0930 17:18:07.638424 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938-dns-svc\") pod \"dnsmasq-dns-666b6646f7-rbz4b\" (UID: \"5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938\") " pod="openstack/dnsmasq-dns-666b6646f7-rbz4b"
Sep 30 17:18:07 crc kubenswrapper[4821]: I0930 17:18:07.638691 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938-config\") pod \"dnsmasq-dns-666b6646f7-rbz4b\" (UID: \"5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938\") " pod="openstack/dnsmasq-dns-666b6646f7-rbz4b"
Sep 30 17:18:07 crc kubenswrapper[4821]: I0930 17:18:07.670387 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hnt6\" (UniqueName: \"kubernetes.io/projected/5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938-kube-api-access-6hnt6\") pod \"dnsmasq-dns-666b6646f7-rbz4b\" (UID: \"5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938\") " pod="openstack/dnsmasq-dns-666b6646f7-rbz4b"
Sep 30 17:18:07 crc kubenswrapper[4821]: I0930 17:18:07.754074 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zprcc"]
Sep 30 17:18:07 crc kubenswrapper[4821]: I0930 17:18:07.774007 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-rbz4b"
Sep 30 17:18:07 crc kubenswrapper[4821]: I0930 17:18:07.795170 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6g8kh"]
Sep 30 17:18:07 crc kubenswrapper[4821]: I0930 17:18:07.796274 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-6g8kh"
Sep 30 17:18:07 crc kubenswrapper[4821]: I0930 17:18:07.814377 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6g8kh"]
Sep 30 17:18:07 crc kubenswrapper[4821]: I0930 17:18:07.849495 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkwq7\" (UniqueName: \"kubernetes.io/projected/255305c2-9daa-4299-8c3c-99bf312b7cd7-kube-api-access-xkwq7\") pod \"dnsmasq-dns-57d769cc4f-6g8kh\" (UID: \"255305c2-9daa-4299-8c3c-99bf312b7cd7\") " pod="openstack/dnsmasq-dns-57d769cc4f-6g8kh"
Sep 30 17:18:07 crc kubenswrapper[4821]: I0930 17:18:07.849568 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/255305c2-9daa-4299-8c3c-99bf312b7cd7-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-6g8kh\" (UID: \"255305c2-9daa-4299-8c3c-99bf312b7cd7\") " pod="openstack/dnsmasq-dns-57d769cc4f-6g8kh"
Sep 30 17:18:07 crc kubenswrapper[4821]: I0930 17:18:07.849607 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/255305c2-9daa-4299-8c3c-99bf312b7cd7-config\") pod \"dnsmasq-dns-57d769cc4f-6g8kh\" (UID: \"255305c2-9daa-4299-8c3c-99bf312b7cd7\") " pod="openstack/dnsmasq-dns-57d769cc4f-6g8kh"
Sep 30 17:18:07 crc kubenswrapper[4821]: I0930 17:18:07.953443 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkwq7\" (UniqueName: \"kubernetes.io/projected/255305c2-9daa-4299-8c3c-99bf312b7cd7-kube-api-access-xkwq7\") pod \"dnsmasq-dns-57d769cc4f-6g8kh\" (UID: \"255305c2-9daa-4299-8c3c-99bf312b7cd7\") " pod="openstack/dnsmasq-dns-57d769cc4f-6g8kh"
Sep 30 17:18:07 crc kubenswrapper[4821]: I0930 17:18:07.953791 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/255305c2-9daa-4299-8c3c-99bf312b7cd7-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-6g8kh\" (UID: \"255305c2-9daa-4299-8c3c-99bf312b7cd7\") " pod="openstack/dnsmasq-dns-57d769cc4f-6g8kh"
Sep 30 17:18:07 crc kubenswrapper[4821]: I0930 17:18:07.953839 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/255305c2-9daa-4299-8c3c-99bf312b7cd7-config\") pod \"dnsmasq-dns-57d769cc4f-6g8kh\" (UID: \"255305c2-9daa-4299-8c3c-99bf312b7cd7\") " pod="openstack/dnsmasq-dns-57d769cc4f-6g8kh"
Sep 30 17:18:07 crc kubenswrapper[4821]: I0930 17:18:07.955107 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/255305c2-9daa-4299-8c3c-99bf312b7cd7-config\") pod \"dnsmasq-dns-57d769cc4f-6g8kh\" (UID: \"255305c2-9daa-4299-8c3c-99bf312b7cd7\") " pod="openstack/dnsmasq-dns-57d769cc4f-6g8kh"
Sep 30 17:18:07 crc kubenswrapper[4821]: I0930 17:18:07.955719 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/255305c2-9daa-4299-8c3c-99bf312b7cd7-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-6g8kh\" (UID: \"255305c2-9daa-4299-8c3c-99bf312b7cd7\") " pod="openstack/dnsmasq-dns-57d769cc4f-6g8kh"
Sep 30 17:18:07 crc kubenswrapper[4821]: I0930 17:18:07.981875 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkwq7\" (UniqueName: \"kubernetes.io/projected/255305c2-9daa-4299-8c3c-99bf312b7cd7-kube-api-access-xkwq7\") pod \"dnsmasq-dns-57d769cc4f-6g8kh\" (UID: \"255305c2-9daa-4299-8c3c-99bf312b7cd7\") " pod="openstack/dnsmasq-dns-57d769cc4f-6g8kh"
Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.184047 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-6g8kh"
Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.297577 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rbz4b"]
Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.590427 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.598899 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.603604 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.603877 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.604018 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.604044 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.604214 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.604612 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.609555 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-fvk5h"
Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.609746 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.648229 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6g8kh"]
Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.666122 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3c950f02-8f72-4d89-af10-660187db2344-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3c950f02-8f72-4d89-af10-660187db2344\") " pod="openstack/rabbitmq-server-0"
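The reflector.go "Caches populated" lines mark kubelet starting a dedicated watch for each Secret and ConfigMap that rabbitmq-server-0 references; the corresponding volume mounts only proceed once these caches are warm. A rough userspace analogue using client-go informers, scoped to the openstack namespace like the entries above, is sketched below; the kubeconfig path, resync period, and handler body are illustrative assumptions.

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a reachable kubeconfig at the default location; error handling trimmed.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// One shared informer factory scoped to "openstack", loosely analogous to the
	// per-object kubelet reflectors that feed secret/configmap volume mounts.
	factory := informers.NewSharedInformerFactoryWithOptions(
		clientset, 10*time.Minute, informers.WithNamespace("openstack"))

	factory.Core().V1().Secrets().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			fmt.Printf("secret cached: %s\n", obj.(*corev1.Secret).Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop) // the moment analogous to "Caches populated" in the log
	select {}
}
```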
pod="openstack/rabbitmq-server-0" Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.666169 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3c950f02-8f72-4d89-af10-660187db2344-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3c950f02-8f72-4d89-af10-660187db2344\") " pod="openstack/rabbitmq-server-0" Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.666194 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3c950f02-8f72-4d89-af10-660187db2344-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3c950f02-8f72-4d89-af10-660187db2344\") " pod="openstack/rabbitmq-server-0" Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.666219 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"3c950f02-8f72-4d89-af10-660187db2344\") " pod="openstack/rabbitmq-server-0" Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.666245 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3c950f02-8f72-4d89-af10-660187db2344-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3c950f02-8f72-4d89-af10-660187db2344\") " pod="openstack/rabbitmq-server-0" Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.666268 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3c950f02-8f72-4d89-af10-660187db2344-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3c950f02-8f72-4d89-af10-660187db2344\") " pod="openstack/rabbitmq-server-0" Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.666427 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zl9q\" (UniqueName: \"kubernetes.io/projected/3c950f02-8f72-4d89-af10-660187db2344-kube-api-access-2zl9q\") pod \"rabbitmq-server-0\" (UID: \"3c950f02-8f72-4d89-af10-660187db2344\") " pod="openstack/rabbitmq-server-0" Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.666459 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c950f02-8f72-4d89-af10-660187db2344-config-data\") pod \"rabbitmq-server-0\" (UID: \"3c950f02-8f72-4d89-af10-660187db2344\") " pod="openstack/rabbitmq-server-0" Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.666481 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3c950f02-8f72-4d89-af10-660187db2344-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3c950f02-8f72-4d89-af10-660187db2344\") " pod="openstack/rabbitmq-server-0" Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.666502 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3c950f02-8f72-4d89-af10-660187db2344-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3c950f02-8f72-4d89-af10-660187db2344\") " pod="openstack/rabbitmq-server-0" Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 
17:18:08.666530 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3c950f02-8f72-4d89-af10-660187db2344-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3c950f02-8f72-4d89-af10-660187db2344\") " pod="openstack/rabbitmq-server-0" Sep 30 17:18:08 crc kubenswrapper[4821]: W0930 17:18:08.670506 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod255305c2_9daa_4299_8c3c_99bf312b7cd7.slice/crio-8395865730c92cee323556232f075c877e3bfa9f750c2ab67976146b62d95294 WatchSource:0}: Error finding container 8395865730c92cee323556232f075c877e3bfa9f750c2ab67976146b62d95294: Status 404 returned error can't find the container with id 8395865730c92cee323556232f075c877e3bfa9f750c2ab67976146b62d95294 Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.768031 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3c950f02-8f72-4d89-af10-660187db2344-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3c950f02-8f72-4d89-af10-660187db2344\") " pod="openstack/rabbitmq-server-0" Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.768110 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"3c950f02-8f72-4d89-af10-660187db2344\") " pod="openstack/rabbitmq-server-0" Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.768162 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3c950f02-8f72-4d89-af10-660187db2344-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3c950f02-8f72-4d89-af10-660187db2344\") " pod="openstack/rabbitmq-server-0" Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.768186 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3c950f02-8f72-4d89-af10-660187db2344-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3c950f02-8f72-4d89-af10-660187db2344\") " pod="openstack/rabbitmq-server-0" Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.768213 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zl9q\" (UniqueName: \"kubernetes.io/projected/3c950f02-8f72-4d89-af10-660187db2344-kube-api-access-2zl9q\") pod \"rabbitmq-server-0\" (UID: \"3c950f02-8f72-4d89-af10-660187db2344\") " pod="openstack/rabbitmq-server-0" Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.768236 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c950f02-8f72-4d89-af10-660187db2344-config-data\") pod \"rabbitmq-server-0\" (UID: \"3c950f02-8f72-4d89-af10-660187db2344\") " pod="openstack/rabbitmq-server-0" Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.768264 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3c950f02-8f72-4d89-af10-660187db2344-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3c950f02-8f72-4d89-af10-660187db2344\") " pod="openstack/rabbitmq-server-0" Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.768289 4821 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3c950f02-8f72-4d89-af10-660187db2344-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3c950f02-8f72-4d89-af10-660187db2344\") " pod="openstack/rabbitmq-server-0" Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.768331 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3c950f02-8f72-4d89-af10-660187db2344-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3c950f02-8f72-4d89-af10-660187db2344\") " pod="openstack/rabbitmq-server-0" Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.768387 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3c950f02-8f72-4d89-af10-660187db2344-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3c950f02-8f72-4d89-af10-660187db2344\") " pod="openstack/rabbitmq-server-0" Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.768422 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3c950f02-8f72-4d89-af10-660187db2344-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3c950f02-8f72-4d89-af10-660187db2344\") " pod="openstack/rabbitmq-server-0" Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.768501 4821 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"3c950f02-8f72-4d89-af10-660187db2344\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.770022 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c950f02-8f72-4d89-af10-660187db2344-config-data\") pod \"rabbitmq-server-0\" (UID: \"3c950f02-8f72-4d89-af10-660187db2344\") " pod="openstack/rabbitmq-server-0" Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.770927 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3c950f02-8f72-4d89-af10-660187db2344-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3c950f02-8f72-4d89-af10-660187db2344\") " pod="openstack/rabbitmq-server-0" Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.771404 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3c950f02-8f72-4d89-af10-660187db2344-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3c950f02-8f72-4d89-af10-660187db2344\") " pod="openstack/rabbitmq-server-0" Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.772127 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3c950f02-8f72-4d89-af10-660187db2344-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3c950f02-8f72-4d89-af10-660187db2344\") " pod="openstack/rabbitmq-server-0" Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.772458 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3c950f02-8f72-4d89-af10-660187db2344-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3c950f02-8f72-4d89-af10-660187db2344\") " pod="openstack/rabbitmq-server-0" Sep 30 17:18:08 crc 
kubenswrapper[4821]: I0930 17:18:08.774690 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3c950f02-8f72-4d89-af10-660187db2344-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3c950f02-8f72-4d89-af10-660187db2344\") " pod="openstack/rabbitmq-server-0" Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.775674 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3c950f02-8f72-4d89-af10-660187db2344-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3c950f02-8f72-4d89-af10-660187db2344\") " pod="openstack/rabbitmq-server-0" Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.783874 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3c950f02-8f72-4d89-af10-660187db2344-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3c950f02-8f72-4d89-af10-660187db2344\") " pod="openstack/rabbitmq-server-0" Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.784340 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3c950f02-8f72-4d89-af10-660187db2344-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3c950f02-8f72-4d89-af10-660187db2344\") " pod="openstack/rabbitmq-server-0" Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.790151 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zl9q\" (UniqueName: \"kubernetes.io/projected/3c950f02-8f72-4d89-af10-660187db2344-kube-api-access-2zl9q\") pod \"rabbitmq-server-0\" (UID: \"3c950f02-8f72-4d89-af10-660187db2344\") " pod="openstack/rabbitmq-server-0" Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.808429 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"3c950f02-8f72-4d89-af10-660187db2344\") " pod="openstack/rabbitmq-server-0" Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.933831 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.934517 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.937771 4821 util.go:30] "No sandbox for pod can be found. 
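The MountVolume.MountDevice line for "local-storage09-crc" shows what backs the RabbitMQ PVC: a statically provisioned local PersistentVolume whose directory is /mnt/openstack/pv09 on the crc node. A sketch of what such a PV object plausibly looks like, built with the k8s.io/api types, follows; only the volume name, path, and node come from the log, while the capacity, storage class, and filesystem are assumptions.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	fsType := "xfs" // assumption; the log does not state the filesystem
	pv := corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "local-storage09-crc"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("10Gi"), // hypothetical size
			},
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: "local-storage", // hypothetical class name
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: "/mnt/openstack/pv09", FSType: &fsType},
			},
			// A local volume must pin to the node that owns the path; "crc" is the
			// single node appearing in this log.
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{"crc"},
						}},
					}},
				},
			},
		},
	}
	fmt.Printf("%s -> %s\n", pv.Name, pv.Spec.PersistentVolumeSource.Local.Path)
}
```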
Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.940323 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.941521 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.945642 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.946588 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.947288 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-lp86q"
Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.947901 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.951808 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.952217 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.974804 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9564b951-f1dc-471d-b442-9fc27616e8b6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9564b951-f1dc-471d-b442-9fc27616e8b6\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.974847 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9564b951-f1dc-471d-b442-9fc27616e8b6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9564b951-f1dc-471d-b442-9fc27616e8b6\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.974869 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9564b951-f1dc-471d-b442-9fc27616e8b6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9564b951-f1dc-471d-b442-9fc27616e8b6\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.974905 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9564b951-f1dc-471d-b442-9fc27616e8b6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9564b951-f1dc-471d-b442-9fc27616e8b6\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.974947 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9564b951-f1dc-471d-b442-9fc27616e8b6\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.974961 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9564b951-f1dc-471d-b442-9fc27616e8b6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9564b951-f1dc-471d-b442-9fc27616e8b6\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.974988 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9564b951-f1dc-471d-b442-9fc27616e8b6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9564b951-f1dc-471d-b442-9fc27616e8b6\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.975008 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9564b951-f1dc-471d-b442-9fc27616e8b6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9564b951-f1dc-471d-b442-9fc27616e8b6\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.975028 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9564b951-f1dc-471d-b442-9fc27616e8b6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9564b951-f1dc-471d-b442-9fc27616e8b6\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.975056 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v64kz\" (UniqueName: \"kubernetes.io/projected/9564b951-f1dc-471d-b442-9fc27616e8b6-kube-api-access-v64kz\") pod \"rabbitmq-cell1-server-0\" (UID: \"9564b951-f1dc-471d-b442-9fc27616e8b6\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:18:08 crc kubenswrapper[4821]: I0930 17:18:08.975083 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9564b951-f1dc-471d-b442-9fc27616e8b6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9564b951-f1dc-471d-b442-9fc27616e8b6\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:18:09 crc kubenswrapper[4821]: I0930 17:18:09.076224 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9564b951-f1dc-471d-b442-9fc27616e8b6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9564b951-f1dc-471d-b442-9fc27616e8b6\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:18:09 crc kubenswrapper[4821]: I0930 17:18:09.076267 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9564b951-f1dc-471d-b442-9fc27616e8b6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9564b951-f1dc-471d-b442-9fc27616e8b6\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:18:09 crc kubenswrapper[4821]: I0930 17:18:09.076288 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9564b951-f1dc-471d-b442-9fc27616e8b6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9564b951-f1dc-471d-b442-9fc27616e8b6\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:18:09 crc kubenswrapper[4821]: I0930 17:18:09.076322 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v64kz\" (UniqueName: \"kubernetes.io/projected/9564b951-f1dc-471d-b442-9fc27616e8b6-kube-api-access-v64kz\") pod \"rabbitmq-cell1-server-0\" (UID: \"9564b951-f1dc-471d-b442-9fc27616e8b6\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:18:09 crc kubenswrapper[4821]: I0930 17:18:09.076346 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9564b951-f1dc-471d-b442-9fc27616e8b6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9564b951-f1dc-471d-b442-9fc27616e8b6\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:18:09 crc kubenswrapper[4821]: I0930 17:18:09.076377 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9564b951-f1dc-471d-b442-9fc27616e8b6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9564b951-f1dc-471d-b442-9fc27616e8b6\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:18:09 crc kubenswrapper[4821]: I0930 17:18:09.076394 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9564b951-f1dc-471d-b442-9fc27616e8b6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9564b951-f1dc-471d-b442-9fc27616e8b6\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:18:09 crc kubenswrapper[4821]: I0930 17:18:09.076414 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9564b951-f1dc-471d-b442-9fc27616e8b6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9564b951-f1dc-471d-b442-9fc27616e8b6\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:18:09 crc kubenswrapper[4821]: I0930 17:18:09.076444 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9564b951-f1dc-471d-b442-9fc27616e8b6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9564b951-f1dc-471d-b442-9fc27616e8b6\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:18:09 crc kubenswrapper[4821]: I0930 17:18:09.076478 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9564b951-f1dc-471d-b442-9fc27616e8b6\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:18:09 crc kubenswrapper[4821]: I0930 17:18:09.076496 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9564b951-f1dc-471d-b442-9fc27616e8b6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9564b951-f1dc-471d-b442-9fc27616e8b6\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:18:09 crc kubenswrapper[4821]: I0930 17:18:09.077689 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9564b951-f1dc-471d-b442-9fc27616e8b6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9564b951-f1dc-471d-b442-9fc27616e8b6\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:18:09 crc kubenswrapper[4821]: I0930 17:18:09.078106 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9564b951-f1dc-471d-b442-9fc27616e8b6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9564b951-f1dc-471d-b442-9fc27616e8b6\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:18:09 crc kubenswrapper[4821]: I0930 17:18:09.078227 4821 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9564b951-f1dc-471d-b442-9fc27616e8b6\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:18:09 crc kubenswrapper[4821]: I0930 17:18:09.079330 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9564b951-f1dc-471d-b442-9fc27616e8b6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9564b951-f1dc-471d-b442-9fc27616e8b6\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:18:09 crc kubenswrapper[4821]: I0930 17:18:09.080587 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9564b951-f1dc-471d-b442-9fc27616e8b6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9564b951-f1dc-471d-b442-9fc27616e8b6\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:18:09 crc kubenswrapper[4821]: I0930 17:18:09.080962 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9564b951-f1dc-471d-b442-9fc27616e8b6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9564b951-f1dc-471d-b442-9fc27616e8b6\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:18:09 crc kubenswrapper[4821]: I0930 17:18:09.082427 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9564b951-f1dc-471d-b442-9fc27616e8b6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9564b951-f1dc-471d-b442-9fc27616e8b6\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:18:09 crc kubenswrapper[4821]: I0930 17:18:09.083518 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9564b951-f1dc-471d-b442-9fc27616e8b6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9564b951-f1dc-471d-b442-9fc27616e8b6\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:18:09 crc kubenswrapper[4821]: I0930 17:18:09.086300 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9564b951-f1dc-471d-b442-9fc27616e8b6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9564b951-f1dc-471d-b442-9fc27616e8b6\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:18:09 crc kubenswrapper[4821]: I0930 17:18:09.086667 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9564b951-f1dc-471d-b442-9fc27616e8b6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9564b951-f1dc-471d-b442-9fc27616e8b6\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:18:09 crc kubenswrapper[4821]: I0930 17:18:09.096974 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v64kz\" (UniqueName: \"kubernetes.io/projected/9564b951-f1dc-471d-b442-9fc27616e8b6-kube-api-access-v64kz\") pod \"rabbitmq-cell1-server-0\" (UID: \"9564b951-f1dc-471d-b442-9fc27616e8b6\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:18:09 crc kubenswrapper[4821]: I0930 17:18:09.103456 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9564b951-f1dc-471d-b442-9fc27616e8b6\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:18:09 crc kubenswrapper[4821]: I0930 17:18:09.198578 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-6g8kh" event={"ID":"255305c2-9daa-4299-8c3c-99bf312b7cd7","Type":"ContainerStarted","Data":"8395865730c92cee323556232f075c877e3bfa9f750c2ab67976146b62d95294"}
Sep 30 17:18:09 crc kubenswrapper[4821]: I0930 17:18:09.200065 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-rbz4b" event={"ID":"5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938","Type":"ContainerStarted","Data":"fb123fcc2225a3a3f12e4e458e43ea34783a640335cc50e227b7e374f20f377f"}
Sep 30 17:18:09 crc kubenswrapper[4821]: I0930 17:18:09.308247 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:18:09 crc kubenswrapper[4821]: I0930 17:18:09.401106 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.584428 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.585538 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.587902 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.588771 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.589074 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.589079 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-rxxzs"
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.589201 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.593293 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.605903 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.610824 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fe717f3-bd67-4fbe-9f81-ba924767f2aa-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"4fe717f3-bd67-4fbe-9f81-ba924767f2aa\") " pod="openstack/openstack-galera-0"
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.610873 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe717f3-bd67-4fbe-9f81-ba924767f2aa-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"4fe717f3-bd67-4fbe-9f81-ba924767f2aa\") " pod="openstack/openstack-galera-0"
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.610920 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fe717f3-bd67-4fbe-9f81-ba924767f2aa-operator-scripts\") pod \"openstack-galera-0\" (UID: \"4fe717f3-bd67-4fbe-9f81-ba924767f2aa\") " pod="openstack/openstack-galera-0"
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.610935 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4fe717f3-bd67-4fbe-9f81-ba924767f2aa-kolla-config\") pod \"openstack-galera-0\" (UID: \"4fe717f3-bd67-4fbe-9f81-ba924767f2aa\") " pod="openstack/openstack-galera-0"
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.610967 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qvnx\" (UniqueName: \"kubernetes.io/projected/4fe717f3-bd67-4fbe-9f81-ba924767f2aa-kube-api-access-7qvnx\") pod \"openstack-galera-0\" (UID: \"4fe717f3-bd67-4fbe-9f81-ba924767f2aa\") " pod="openstack/openstack-galera-0"
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.610992 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/4fe717f3-bd67-4fbe-9f81-ba924767f2aa-secrets\") pod \"openstack-galera-0\" (UID: \"4fe717f3-bd67-4fbe-9f81-ba924767f2aa\") " pod="openstack/openstack-galera-0"
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.611008 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"4fe717f3-bd67-4fbe-9f81-ba924767f2aa\") " pod="openstack/openstack-galera-0"
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.611026 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4fe717f3-bd67-4fbe-9f81-ba924767f2aa-config-data-generated\") pod \"openstack-galera-0\" (UID: \"4fe717f3-bd67-4fbe-9f81-ba924767f2aa\") " pod="openstack/openstack-galera-0"
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.611046 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4fe717f3-bd67-4fbe-9f81-ba924767f2aa-config-data-default\") pod \"openstack-galera-0\" (UID: \"4fe717f3-bd67-4fbe-9f81-ba924767f2aa\") " pod="openstack/openstack-galera-0"
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.712176 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fe717f3-bd67-4fbe-9f81-ba924767f2aa-operator-scripts\") pod \"openstack-galera-0\" (UID: \"4fe717f3-bd67-4fbe-9f81-ba924767f2aa\") " pod="openstack/openstack-galera-0"
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.712203 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4fe717f3-bd67-4fbe-9f81-ba924767f2aa-kolla-config\") pod \"openstack-galera-0\" (UID: \"4fe717f3-bd67-4fbe-9f81-ba924767f2aa\") " pod="openstack/openstack-galera-0"
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.712236 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qvnx\" (UniqueName: \"kubernetes.io/projected/4fe717f3-bd67-4fbe-9f81-ba924767f2aa-kube-api-access-7qvnx\") pod \"openstack-galera-0\" (UID: \"4fe717f3-bd67-4fbe-9f81-ba924767f2aa\") " pod="openstack/openstack-galera-0"
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.712260 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/4fe717f3-bd67-4fbe-9f81-ba924767f2aa-secrets\") pod \"openstack-galera-0\" (UID: \"4fe717f3-bd67-4fbe-9f81-ba924767f2aa\") " pod="openstack/openstack-galera-0"
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.712278 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"4fe717f3-bd67-4fbe-9f81-ba924767f2aa\") " pod="openstack/openstack-galera-0"
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.712296 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4fe717f3-bd67-4fbe-9f81-ba924767f2aa-config-data-generated\") pod \"openstack-galera-0\" (UID: \"4fe717f3-bd67-4fbe-9f81-ba924767f2aa\") " pod="openstack/openstack-galera-0"
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.712316 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4fe717f3-bd67-4fbe-9f81-ba924767f2aa-config-data-default\") pod \"openstack-galera-0\" (UID: \"4fe717f3-bd67-4fbe-9f81-ba924767f2aa\") " pod="openstack/openstack-galera-0"
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.712347 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fe717f3-bd67-4fbe-9f81-ba924767f2aa-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"4fe717f3-bd67-4fbe-9f81-ba924767f2aa\") " pod="openstack/openstack-galera-0"
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.712372 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe717f3-bd67-4fbe-9f81-ba924767f2aa-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"4fe717f3-bd67-4fbe-9f81-ba924767f2aa\") " pod="openstack/openstack-galera-0"
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.715686 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4fe717f3-bd67-4fbe-9f81-ba924767f2aa-kolla-config\") pod \"openstack-galera-0\" (UID: \"4fe717f3-bd67-4fbe-9f81-ba924767f2aa\") " pod="openstack/openstack-galera-0"
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.716690 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fe717f3-bd67-4fbe-9f81-ba924767f2aa-operator-scripts\") pod \"openstack-galera-0\" (UID: \"4fe717f3-bd67-4fbe-9f81-ba924767f2aa\") " pod="openstack/openstack-galera-0"
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.717781 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe717f3-bd67-4fbe-9f81-ba924767f2aa-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"4fe717f3-bd67-4fbe-9f81-ba924767f2aa\") " pod="openstack/openstack-galera-0"
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.718060 4821 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"4fe717f3-bd67-4fbe-9f81-ba924767f2aa\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-galera-0"
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.721851 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4fe717f3-bd67-4fbe-9f81-ba924767f2aa-config-data-default\") pod \"openstack-galera-0\" (UID: \"4fe717f3-bd67-4fbe-9f81-ba924767f2aa\") " pod="openstack/openstack-galera-0"
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.722255 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4fe717f3-bd67-4fbe-9f81-ba924767f2aa-config-data-generated\") pod \"openstack-galera-0\" (UID: \"4fe717f3-bd67-4fbe-9f81-ba924767f2aa\") " pod="openstack/openstack-galera-0"
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.734198 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/4fe717f3-bd67-4fbe-9f81-ba924767f2aa-secrets\") pod \"openstack-galera-0\" (UID: \"4fe717f3-bd67-4fbe-9f81-ba924767f2aa\") " pod="openstack/openstack-galera-0"
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.741522 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fe717f3-bd67-4fbe-9f81-ba924767f2aa-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"4fe717f3-bd67-4fbe-9f81-ba924767f2aa\") " pod="openstack/openstack-galera-0"
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.749478 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qvnx\" (UniqueName: \"kubernetes.io/projected/4fe717f3-bd67-4fbe-9f81-ba924767f2aa-kube-api-access-7qvnx\") pod \"openstack-galera-0\" (UID: \"4fe717f3-bd67-4fbe-9f81-ba924767f2aa\") " pod="openstack/openstack-galera-0"
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.818422 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"4fe717f3-bd67-4fbe-9f81-ba924767f2aa\") " pod="openstack/openstack-galera-0"
Sep 30 17:18:10 crc kubenswrapper[4821]: I0930 17:18:10.917287 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.574704 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.575903 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
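In the "SyncLoop (PLEG)" lines above, the "Data" hash is the pod sandbox ID (the infra container CRI-O just created), not an application container; application containers show up in later ContainerStarted events. The same lifecycle can also be observed from the API side by watching pod status, as in the sketch below; the kubeconfig path is an assumption and error handling is minimal.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Watch pod changes in "openstack": an API-side view of the transitions
	// that PLEG reports node-side in this log.
	w, err := clientset.CoreV1().Pods("openstack").Watch(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		fmt.Printf("%s pod=%s phase=%s\n", ev.Type, pod.Name, pod.Status.Phase)
	}
}
```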
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.578844 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.578971 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.579988 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.582724 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-4s4gx"
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.588948 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.738413 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/093b61b0-fba5-4f4e-8913-0c3700840535-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"093b61b0-fba5-4f4e-8913-0c3700840535\") " pod="openstack/openstack-cell1-galera-0"
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.738483 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/093b61b0-fba5-4f4e-8913-0c3700840535-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"093b61b0-fba5-4f4e-8913-0c3700840535\") " pod="openstack/openstack-cell1-galera-0"
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.738523 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/093b61b0-fba5-4f4e-8913-0c3700840535-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"093b61b0-fba5-4f4e-8913-0c3700840535\") " pod="openstack/openstack-cell1-galera-0"
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.738564 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/093b61b0-fba5-4f4e-8913-0c3700840535-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"093b61b0-fba5-4f4e-8913-0c3700840535\") " pod="openstack/openstack-cell1-galera-0"
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.738585 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"093b61b0-fba5-4f4e-8913-0c3700840535\") " pod="openstack/openstack-cell1-galera-0"
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.738690 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/093b61b0-fba5-4f4e-8913-0c3700840535-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"093b61b0-fba5-4f4e-8913-0c3700840535\") " pod="openstack/openstack-cell1-galera-0"
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.738863 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgtjv\" (UniqueName: \"kubernetes.io/projected/093b61b0-fba5-4f4e-8913-0c3700840535-kube-api-access-dgtjv\") pod \"openstack-cell1-galera-0\" (UID: \"093b61b0-fba5-4f4e-8913-0c3700840535\") " pod="openstack/openstack-cell1-galera-0"
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.738947 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/093b61b0-fba5-4f4e-8913-0c3700840535-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"093b61b0-fba5-4f4e-8913-0c3700840535\") " pod="openstack/openstack-cell1-galera-0"
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.739068 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/093b61b0-fba5-4f4e-8913-0c3700840535-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"093b61b0-fba5-4f4e-8913-0c3700840535\") " pod="openstack/openstack-cell1-galera-0"
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.840818 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/093b61b0-fba5-4f4e-8913-0c3700840535-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"093b61b0-fba5-4f4e-8913-0c3700840535\") " pod="openstack/openstack-cell1-galera-0"
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.840883 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/093b61b0-fba5-4f4e-8913-0c3700840535-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"093b61b0-fba5-4f4e-8913-0c3700840535\") " pod="openstack/openstack-cell1-galera-0"
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.840908 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/093b61b0-fba5-4f4e-8913-0c3700840535-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"093b61b0-fba5-4f4e-8913-0c3700840535\") " pod="openstack/openstack-cell1-galera-0"
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.840946 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/093b61b0-fba5-4f4e-8913-0c3700840535-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"093b61b0-fba5-4f4e-8913-0c3700840535\") " pod="openstack/openstack-cell1-galera-0"
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.840974 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"093b61b0-fba5-4f4e-8913-0c3700840535\") " pod="openstack/openstack-cell1-galera-0"
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.841005 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/093b61b0-fba5-4f4e-8913-0c3700840535-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"093b61b0-fba5-4f4e-8913-0c3700840535\") " pod="openstack/openstack-cell1-galera-0"
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.841043 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgtjv\" (UniqueName: \"kubernetes.io/projected/093b61b0-fba5-4f4e-8913-0c3700840535-kube-api-access-dgtjv\") pod \"openstack-cell1-galera-0\" (UID: \"093b61b0-fba5-4f4e-8913-0c3700840535\") " pod="openstack/openstack-cell1-galera-0"
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.841070 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/093b61b0-fba5-4f4e-8913-0c3700840535-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"093b61b0-fba5-4f4e-8913-0c3700840535\") " pod="openstack/openstack-cell1-galera-0"
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.841172 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/093b61b0-fba5-4f4e-8913-0c3700840535-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"093b61b0-fba5-4f4e-8913-0c3700840535\") " pod="openstack/openstack-cell1-galera-0"
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.842708 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/093b61b0-fba5-4f4e-8913-0c3700840535-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"093b61b0-fba5-4f4e-8913-0c3700840535\") " pod="openstack/openstack-cell1-galera-0"
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.842855 4821 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"093b61b0-fba5-4f4e-8913-0c3700840535\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-cell1-galera-0"
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.843322 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/093b61b0-fba5-4f4e-8913-0c3700840535-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"093b61b0-fba5-4f4e-8913-0c3700840535\") " pod="openstack/openstack-cell1-galera-0"
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.842920 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/093b61b0-fba5-4f4e-8913-0c3700840535-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"093b61b0-fba5-4f4e-8913-0c3700840535\") " pod="openstack/openstack-cell1-galera-0"
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.844433 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/093b61b0-fba5-4f4e-8913-0c3700840535-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"093b61b0-fba5-4f4e-8913-0c3700840535\") " pod="openstack/openstack-cell1-galera-0"
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.852190 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/093b61b0-fba5-4f4e-8913-0c3700840535-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"093b61b0-fba5-4f4e-8913-0c3700840535\") " pod="openstack/openstack-cell1-galera-0"
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.860653 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/093b61b0-fba5-4f4e-8913-0c3700840535-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"093b61b0-fba5-4f4e-8913-0c3700840535\") " pod="openstack/openstack-cell1-galera-0"
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.867206 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/093b61b0-fba5-4f4e-8913-0c3700840535-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"093b61b0-fba5-4f4e-8913-0c3700840535\") " pod="openstack/openstack-cell1-galera-0"
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.868759 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgtjv\" (UniqueName: \"kubernetes.io/projected/093b61b0-fba5-4f4e-8913-0c3700840535-kube-api-access-dgtjv\") pod \"openstack-cell1-galera-0\" (UID: \"093b61b0-fba5-4f4e-8913-0c3700840535\") " pod="openstack/openstack-cell1-galera-0"
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.886408 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"093b61b0-fba5-4f4e-8913-0c3700840535\") " pod="openstack/openstack-cell1-galera-0"
Sep 30 17:18:11 crc kubenswrapper[4821]: I0930 17:18:11.910412 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Sep 30 17:18:12 crc kubenswrapper[4821]: I0930 17:18:12.015736 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Sep 30 17:18:12 crc kubenswrapper[4821]: I0930 17:18:12.017544 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Sep 30 17:18:12 crc kubenswrapper[4821]: I0930 17:18:12.022680 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-kswvs"
Sep 30 17:18:12 crc kubenswrapper[4821]: I0930 17:18:12.022929 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Sep 30 17:18:12 crc kubenswrapper[4821]: I0930 17:18:12.027952 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Sep 30 17:18:12 crc kubenswrapper[4821]: I0930 17:18:12.063332 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Sep 30 17:18:12 crc kubenswrapper[4821]: I0930 17:18:12.156041 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96a178d8-73b6-4ba2-9976-b0544df00047-config-data\") pod \"memcached-0\" (UID: \"96a178d8-73b6-4ba2-9976-b0544df00047\") " pod="openstack/memcached-0"
Sep 30 17:18:12 crc kubenswrapper[4821]: I0930 17:18:12.156138 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96a178d8-73b6-4ba2-9976-b0544df00047-combined-ca-bundle\") pod \"memcached-0\" (UID: \"96a178d8-73b6-4ba2-9976-b0544df00047\") " pod="openstack/memcached-0"
Sep 30 17:18:12 crc kubenswrapper[4821]: I0930 17:18:12.156165 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/96a178d8-73b6-4ba2-9976-b0544df00047-memcached-tls-certs\") pod \"memcached-0\" (UID: \"96a178d8-73b6-4ba2-9976-b0544df00047\") " pod="openstack/memcached-0"
Sep 30 17:18:12 crc kubenswrapper[4821]: I0930 17:18:12.156239 4821 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtdzc\" (UniqueName: \"kubernetes.io/projected/96a178d8-73b6-4ba2-9976-b0544df00047-kube-api-access-xtdzc\") pod \"memcached-0\" (UID: \"96a178d8-73b6-4ba2-9976-b0544df00047\") " pod="openstack/memcached-0" Sep 30 17:18:12 crc kubenswrapper[4821]: I0930 17:18:12.156266 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/96a178d8-73b6-4ba2-9976-b0544df00047-kolla-config\") pod \"memcached-0\" (UID: \"96a178d8-73b6-4ba2-9976-b0544df00047\") " pod="openstack/memcached-0" Sep 30 17:18:12 crc kubenswrapper[4821]: I0930 17:18:12.258013 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtdzc\" (UniqueName: \"kubernetes.io/projected/96a178d8-73b6-4ba2-9976-b0544df00047-kube-api-access-xtdzc\") pod \"memcached-0\" (UID: \"96a178d8-73b6-4ba2-9976-b0544df00047\") " pod="openstack/memcached-0" Sep 30 17:18:12 crc kubenswrapper[4821]: I0930 17:18:12.258064 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/96a178d8-73b6-4ba2-9976-b0544df00047-kolla-config\") pod \"memcached-0\" (UID: \"96a178d8-73b6-4ba2-9976-b0544df00047\") " pod="openstack/memcached-0" Sep 30 17:18:12 crc kubenswrapper[4821]: I0930 17:18:12.258136 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96a178d8-73b6-4ba2-9976-b0544df00047-config-data\") pod \"memcached-0\" (UID: \"96a178d8-73b6-4ba2-9976-b0544df00047\") " pod="openstack/memcached-0" Sep 30 17:18:12 crc kubenswrapper[4821]: I0930 17:18:12.258169 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96a178d8-73b6-4ba2-9976-b0544df00047-combined-ca-bundle\") pod \"memcached-0\" (UID: \"96a178d8-73b6-4ba2-9976-b0544df00047\") " pod="openstack/memcached-0" Sep 30 17:18:12 crc kubenswrapper[4821]: I0930 17:18:12.258186 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/96a178d8-73b6-4ba2-9976-b0544df00047-memcached-tls-certs\") pod \"memcached-0\" (UID: \"96a178d8-73b6-4ba2-9976-b0544df00047\") " pod="openstack/memcached-0" Sep 30 17:18:12 crc kubenswrapper[4821]: I0930 17:18:12.258973 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/96a178d8-73b6-4ba2-9976-b0544df00047-kolla-config\") pod \"memcached-0\" (UID: \"96a178d8-73b6-4ba2-9976-b0544df00047\") " pod="openstack/memcached-0" Sep 30 17:18:12 crc kubenswrapper[4821]: I0930 17:18:12.259781 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96a178d8-73b6-4ba2-9976-b0544df00047-config-data\") pod \"memcached-0\" (UID: \"96a178d8-73b6-4ba2-9976-b0544df00047\") " pod="openstack/memcached-0" Sep 30 17:18:12 crc kubenswrapper[4821]: I0930 17:18:12.261721 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/96a178d8-73b6-4ba2-9976-b0544df00047-memcached-tls-certs\") pod \"memcached-0\" (UID: \"96a178d8-73b6-4ba2-9976-b0544df00047\") " pod="openstack/memcached-0" Sep 30 17:18:12 crc 
kubenswrapper[4821]: I0930 17:18:12.265021 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96a178d8-73b6-4ba2-9976-b0544df00047-combined-ca-bundle\") pod \"memcached-0\" (UID: \"96a178d8-73b6-4ba2-9976-b0544df00047\") " pod="openstack/memcached-0" Sep 30 17:18:12 crc kubenswrapper[4821]: I0930 17:18:12.275061 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtdzc\" (UniqueName: \"kubernetes.io/projected/96a178d8-73b6-4ba2-9976-b0544df00047-kube-api-access-xtdzc\") pod \"memcached-0\" (UID: \"96a178d8-73b6-4ba2-9976-b0544df00047\") " pod="openstack/memcached-0" Sep 30 17:18:12 crc kubenswrapper[4821]: I0930 17:18:12.356396 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Sep 30 17:18:15 crc kubenswrapper[4821]: I0930 17:18:15.251053 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3c950f02-8f72-4d89-af10-660187db2344","Type":"ContainerStarted","Data":"d1f543fa01830432e70da61a5f0786eb2644017db4aa40e978699d1999a71cf3"} Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.180728 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-4bf4d"] Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.181932 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4bf4d" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.186592 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.186839 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.186980 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-zbkp7" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.191044 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-72fq8"] Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.192623 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-72fq8" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.200856 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4bf4d"] Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.228543 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-72fq8"] Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.289469 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bb53300f-a5be-4cf1-a5db-7847ae0d7e12-var-log-ovn\") pod \"ovn-controller-4bf4d\" (UID: \"bb53300f-a5be-4cf1-a5db-7847ae0d7e12\") " pod="openstack/ovn-controller-4bf4d" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.289518 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4mk7\" (UniqueName: \"kubernetes.io/projected/bb53300f-a5be-4cf1-a5db-7847ae0d7e12-kube-api-access-f4mk7\") pod \"ovn-controller-4bf4d\" (UID: \"bb53300f-a5be-4cf1-a5db-7847ae0d7e12\") " pod="openstack/ovn-controller-4bf4d" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.289558 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bb53300f-a5be-4cf1-a5db-7847ae0d7e12-var-run\") pod \"ovn-controller-4bf4d\" (UID: \"bb53300f-a5be-4cf1-a5db-7847ae0d7e12\") " pod="openstack/ovn-controller-4bf4d" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.289586 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e8e6a32-bf76-4c50-b5ae-15fb08fb9028-var-log\") pod \"ovn-controller-ovs-72fq8\" (UID: \"8e8e6a32-bf76-4c50-b5ae-15fb08fb9028\") " pod="openstack/ovn-controller-ovs-72fq8" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.289615 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb53300f-a5be-4cf1-a5db-7847ae0d7e12-scripts\") pod \"ovn-controller-4bf4d\" (UID: \"bb53300f-a5be-4cf1-a5db-7847ae0d7e12\") " pod="openstack/ovn-controller-4bf4d" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.289630 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8e8e6a32-bf76-4c50-b5ae-15fb08fb9028-etc-ovs\") pod \"ovn-controller-ovs-72fq8\" (UID: \"8e8e6a32-bf76-4c50-b5ae-15fb08fb9028\") " pod="openstack/ovn-controller-ovs-72fq8" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.289657 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb53300f-a5be-4cf1-a5db-7847ae0d7e12-var-run-ovn\") pod \"ovn-controller-4bf4d\" (UID: \"bb53300f-a5be-4cf1-a5db-7847ae0d7e12\") " pod="openstack/ovn-controller-4bf4d" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.289679 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8e8e6a32-bf76-4c50-b5ae-15fb08fb9028-var-run\") pod \"ovn-controller-ovs-72fq8\" (UID: \"8e8e6a32-bf76-4c50-b5ae-15fb08fb9028\") " pod="openstack/ovn-controller-ovs-72fq8" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.289695 4821 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb53300f-a5be-4cf1-a5db-7847ae0d7e12-ovn-controller-tls-certs\") pod \"ovn-controller-4bf4d\" (UID: \"bb53300f-a5be-4cf1-a5db-7847ae0d7e12\") " pod="openstack/ovn-controller-4bf4d" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.289716 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8e8e6a32-bf76-4c50-b5ae-15fb08fb9028-var-lib\") pod \"ovn-controller-ovs-72fq8\" (UID: \"8e8e6a32-bf76-4c50-b5ae-15fb08fb9028\") " pod="openstack/ovn-controller-ovs-72fq8" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.289921 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcc6n\" (UniqueName: \"kubernetes.io/projected/8e8e6a32-bf76-4c50-b5ae-15fb08fb9028-kube-api-access-wcc6n\") pod \"ovn-controller-ovs-72fq8\" (UID: \"8e8e6a32-bf76-4c50-b5ae-15fb08fb9028\") " pod="openstack/ovn-controller-ovs-72fq8" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.289967 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb53300f-a5be-4cf1-a5db-7847ae0d7e12-combined-ca-bundle\") pod \"ovn-controller-4bf4d\" (UID: \"bb53300f-a5be-4cf1-a5db-7847ae0d7e12\") " pod="openstack/ovn-controller-4bf4d" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.289983 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e8e6a32-bf76-4c50-b5ae-15fb08fb9028-scripts\") pod \"ovn-controller-ovs-72fq8\" (UID: \"8e8e6a32-bf76-4c50-b5ae-15fb08fb9028\") " pod="openstack/ovn-controller-ovs-72fq8" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.391749 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb53300f-a5be-4cf1-a5db-7847ae0d7e12-scripts\") pod \"ovn-controller-4bf4d\" (UID: \"bb53300f-a5be-4cf1-a5db-7847ae0d7e12\") " pod="openstack/ovn-controller-4bf4d" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.391789 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8e8e6a32-bf76-4c50-b5ae-15fb08fb9028-etc-ovs\") pod \"ovn-controller-ovs-72fq8\" (UID: \"8e8e6a32-bf76-4c50-b5ae-15fb08fb9028\") " pod="openstack/ovn-controller-ovs-72fq8" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.391825 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb53300f-a5be-4cf1-a5db-7847ae0d7e12-var-run-ovn\") pod \"ovn-controller-4bf4d\" (UID: \"bb53300f-a5be-4cf1-a5db-7847ae0d7e12\") " pod="openstack/ovn-controller-4bf4d" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.391846 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8e8e6a32-bf76-4c50-b5ae-15fb08fb9028-var-run\") pod \"ovn-controller-ovs-72fq8\" (UID: \"8e8e6a32-bf76-4c50-b5ae-15fb08fb9028\") " pod="openstack/ovn-controller-ovs-72fq8" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.391866 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb53300f-a5be-4cf1-a5db-7847ae0d7e12-ovn-controller-tls-certs\") pod \"ovn-controller-4bf4d\" (UID: \"bb53300f-a5be-4cf1-a5db-7847ae0d7e12\") " pod="openstack/ovn-controller-4bf4d" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.391883 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8e8e6a32-bf76-4c50-b5ae-15fb08fb9028-var-lib\") pod \"ovn-controller-ovs-72fq8\" (UID: \"8e8e6a32-bf76-4c50-b5ae-15fb08fb9028\") " pod="openstack/ovn-controller-ovs-72fq8" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.391902 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcc6n\" (UniqueName: \"kubernetes.io/projected/8e8e6a32-bf76-4c50-b5ae-15fb08fb9028-kube-api-access-wcc6n\") pod \"ovn-controller-ovs-72fq8\" (UID: \"8e8e6a32-bf76-4c50-b5ae-15fb08fb9028\") " pod="openstack/ovn-controller-ovs-72fq8" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.391936 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb53300f-a5be-4cf1-a5db-7847ae0d7e12-combined-ca-bundle\") pod \"ovn-controller-4bf4d\" (UID: \"bb53300f-a5be-4cf1-a5db-7847ae0d7e12\") " pod="openstack/ovn-controller-4bf4d" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.391956 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e8e6a32-bf76-4c50-b5ae-15fb08fb9028-scripts\") pod \"ovn-controller-ovs-72fq8\" (UID: \"8e8e6a32-bf76-4c50-b5ae-15fb08fb9028\") " pod="openstack/ovn-controller-ovs-72fq8" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.391982 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bb53300f-a5be-4cf1-a5db-7847ae0d7e12-var-log-ovn\") pod \"ovn-controller-4bf4d\" (UID: \"bb53300f-a5be-4cf1-a5db-7847ae0d7e12\") " pod="openstack/ovn-controller-4bf4d" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.392006 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4mk7\" (UniqueName: \"kubernetes.io/projected/bb53300f-a5be-4cf1-a5db-7847ae0d7e12-kube-api-access-f4mk7\") pod \"ovn-controller-4bf4d\" (UID: \"bb53300f-a5be-4cf1-a5db-7847ae0d7e12\") " pod="openstack/ovn-controller-4bf4d" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.392042 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bb53300f-a5be-4cf1-a5db-7847ae0d7e12-var-run\") pod \"ovn-controller-4bf4d\" (UID: \"bb53300f-a5be-4cf1-a5db-7847ae0d7e12\") " pod="openstack/ovn-controller-4bf4d" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.392082 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e8e6a32-bf76-4c50-b5ae-15fb08fb9028-var-log\") pod \"ovn-controller-ovs-72fq8\" (UID: \"8e8e6a32-bf76-4c50-b5ae-15fb08fb9028\") " pod="openstack/ovn-controller-ovs-72fq8" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.392554 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e8e6a32-bf76-4c50-b5ae-15fb08fb9028-var-log\") pod \"ovn-controller-ovs-72fq8\" (UID: \"8e8e6a32-bf76-4c50-b5ae-15fb08fb9028\") " 
pod="openstack/ovn-controller-ovs-72fq8" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.392678 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8e8e6a32-bf76-4c50-b5ae-15fb08fb9028-etc-ovs\") pod \"ovn-controller-ovs-72fq8\" (UID: \"8e8e6a32-bf76-4c50-b5ae-15fb08fb9028\") " pod="openstack/ovn-controller-ovs-72fq8" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.392797 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb53300f-a5be-4cf1-a5db-7847ae0d7e12-var-run-ovn\") pod \"ovn-controller-4bf4d\" (UID: \"bb53300f-a5be-4cf1-a5db-7847ae0d7e12\") " pod="openstack/ovn-controller-4bf4d" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.392872 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8e8e6a32-bf76-4c50-b5ae-15fb08fb9028-var-run\") pod \"ovn-controller-ovs-72fq8\" (UID: \"8e8e6a32-bf76-4c50-b5ae-15fb08fb9028\") " pod="openstack/ovn-controller-ovs-72fq8" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.393899 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb53300f-a5be-4cf1-a5db-7847ae0d7e12-scripts\") pod \"ovn-controller-4bf4d\" (UID: \"bb53300f-a5be-4cf1-a5db-7847ae0d7e12\") " pod="openstack/ovn-controller-4bf4d" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.394332 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bb53300f-a5be-4cf1-a5db-7847ae0d7e12-var-log-ovn\") pod \"ovn-controller-4bf4d\" (UID: \"bb53300f-a5be-4cf1-a5db-7847ae0d7e12\") " pod="openstack/ovn-controller-4bf4d" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.394396 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bb53300f-a5be-4cf1-a5db-7847ae0d7e12-var-run\") pod \"ovn-controller-4bf4d\" (UID: \"bb53300f-a5be-4cf1-a5db-7847ae0d7e12\") " pod="openstack/ovn-controller-4bf4d" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.394688 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8e8e6a32-bf76-4c50-b5ae-15fb08fb9028-var-lib\") pod \"ovn-controller-ovs-72fq8\" (UID: \"8e8e6a32-bf76-4c50-b5ae-15fb08fb9028\") " pod="openstack/ovn-controller-ovs-72fq8" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.395699 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e8e6a32-bf76-4c50-b5ae-15fb08fb9028-scripts\") pod \"ovn-controller-ovs-72fq8\" (UID: \"8e8e6a32-bf76-4c50-b5ae-15fb08fb9028\") " pod="openstack/ovn-controller-ovs-72fq8" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.397774 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb53300f-a5be-4cf1-a5db-7847ae0d7e12-combined-ca-bundle\") pod \"ovn-controller-4bf4d\" (UID: \"bb53300f-a5be-4cf1-a5db-7847ae0d7e12\") " pod="openstack/ovn-controller-4bf4d" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.401020 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb53300f-a5be-4cf1-a5db-7847ae0d7e12-ovn-controller-tls-certs\") pod 
\"ovn-controller-4bf4d\" (UID: \"bb53300f-a5be-4cf1-a5db-7847ae0d7e12\") " pod="openstack/ovn-controller-4bf4d" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.409954 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4mk7\" (UniqueName: \"kubernetes.io/projected/bb53300f-a5be-4cf1-a5db-7847ae0d7e12-kube-api-access-f4mk7\") pod \"ovn-controller-4bf4d\" (UID: \"bb53300f-a5be-4cf1-a5db-7847ae0d7e12\") " pod="openstack/ovn-controller-4bf4d" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.412966 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcc6n\" (UniqueName: \"kubernetes.io/projected/8e8e6a32-bf76-4c50-b5ae-15fb08fb9028-kube-api-access-wcc6n\") pod \"ovn-controller-ovs-72fq8\" (UID: \"8e8e6a32-bf76-4c50-b5ae-15fb08fb9028\") " pod="openstack/ovn-controller-ovs-72fq8" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.497926 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4bf4d" Sep 30 17:18:18 crc kubenswrapper[4821]: I0930 17:18:18.516547 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-72fq8" Sep 30 17:18:19 crc kubenswrapper[4821]: I0930 17:18:19.350371 4821 patch_prober.go:28] interesting pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:18:19 crc kubenswrapper[4821]: I0930 17:18:19.350709 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:18:19 crc kubenswrapper[4821]: I0930 17:18:19.963961 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Sep 30 17:18:19 crc kubenswrapper[4821]: I0930 17:18:19.965304 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Sep 30 17:18:19 crc kubenswrapper[4821]: I0930 17:18:19.968007 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Sep 30 17:18:19 crc kubenswrapper[4821]: I0930 17:18:19.968399 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Sep 30 17:18:19 crc kubenswrapper[4821]: I0930 17:18:19.968563 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Sep 30 17:18:19 crc kubenswrapper[4821]: I0930 17:18:19.968805 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-9hb5n" Sep 30 17:18:19 crc kubenswrapper[4821]: I0930 17:18:19.968911 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Sep 30 17:18:19 crc kubenswrapper[4821]: I0930 17:18:19.980650 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.117973 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e671ec19-1ea3-4632-93fb-c2e1616b1e33-config\") pod \"ovsdbserver-sb-0\" (UID: \"e671ec19-1ea3-4632-93fb-c2e1616b1e33\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.118046 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e671ec19-1ea3-4632-93fb-c2e1616b1e33-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e671ec19-1ea3-4632-93fb-c2e1616b1e33\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.118372 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9skwg\" (UniqueName: \"kubernetes.io/projected/e671ec19-1ea3-4632-93fb-c2e1616b1e33-kube-api-access-9skwg\") pod \"ovsdbserver-sb-0\" (UID: \"e671ec19-1ea3-4632-93fb-c2e1616b1e33\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.119332 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e671ec19-1ea3-4632-93fb-c2e1616b1e33-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e671ec19-1ea3-4632-93fb-c2e1616b1e33\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.119480 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"e671ec19-1ea3-4632-93fb-c2e1616b1e33\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.119509 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e671ec19-1ea3-4632-93fb-c2e1616b1e33-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e671ec19-1ea3-4632-93fb-c2e1616b1e33\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.119544 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e671ec19-1ea3-4632-93fb-c2e1616b1e33-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e671ec19-1ea3-4632-93fb-c2e1616b1e33\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.119681 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e671ec19-1ea3-4632-93fb-c2e1616b1e33-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e671ec19-1ea3-4632-93fb-c2e1616b1e33\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.179195 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.192599 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.192746 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.195127 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-fgj8w" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.195366 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.195395 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.196883 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.221229 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"e671ec19-1ea3-4632-93fb-c2e1616b1e33\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.221267 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e671ec19-1ea3-4632-93fb-c2e1616b1e33-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e671ec19-1ea3-4632-93fb-c2e1616b1e33\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.221291 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e671ec19-1ea3-4632-93fb-c2e1616b1e33-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e671ec19-1ea3-4632-93fb-c2e1616b1e33\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.221320 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e671ec19-1ea3-4632-93fb-c2e1616b1e33-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e671ec19-1ea3-4632-93fb-c2e1616b1e33\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.221386 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e671ec19-1ea3-4632-93fb-c2e1616b1e33-config\") pod \"ovsdbserver-sb-0\" (UID: \"e671ec19-1ea3-4632-93fb-c2e1616b1e33\") " 
pod="openstack/ovsdbserver-sb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.221412 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e671ec19-1ea3-4632-93fb-c2e1616b1e33-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e671ec19-1ea3-4632-93fb-c2e1616b1e33\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.221450 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9skwg\" (UniqueName: \"kubernetes.io/projected/e671ec19-1ea3-4632-93fb-c2e1616b1e33-kube-api-access-9skwg\") pod \"ovsdbserver-sb-0\" (UID: \"e671ec19-1ea3-4632-93fb-c2e1616b1e33\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.221491 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e671ec19-1ea3-4632-93fb-c2e1616b1e33-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e671ec19-1ea3-4632-93fb-c2e1616b1e33\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.221582 4821 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"e671ec19-1ea3-4632-93fb-c2e1616b1e33\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-sb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.221720 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e671ec19-1ea3-4632-93fb-c2e1616b1e33-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e671ec19-1ea3-4632-93fb-c2e1616b1e33\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.222897 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e671ec19-1ea3-4632-93fb-c2e1616b1e33-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e671ec19-1ea3-4632-93fb-c2e1616b1e33\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.225099 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e671ec19-1ea3-4632-93fb-c2e1616b1e33-config\") pod \"ovsdbserver-sb-0\" (UID: \"e671ec19-1ea3-4632-93fb-c2e1616b1e33\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.226377 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e671ec19-1ea3-4632-93fb-c2e1616b1e33-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e671ec19-1ea3-4632-93fb-c2e1616b1e33\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.236906 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9skwg\" (UniqueName: \"kubernetes.io/projected/e671ec19-1ea3-4632-93fb-c2e1616b1e33-kube-api-access-9skwg\") pod \"ovsdbserver-sb-0\" (UID: \"e671ec19-1ea3-4632-93fb-c2e1616b1e33\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.240234 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod 
\"ovsdbserver-sb-0\" (UID: \"e671ec19-1ea3-4632-93fb-c2e1616b1e33\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.242945 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e671ec19-1ea3-4632-93fb-c2e1616b1e33-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e671ec19-1ea3-4632-93fb-c2e1616b1e33\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.244418 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e671ec19-1ea3-4632-93fb-c2e1616b1e33-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e671ec19-1ea3-4632-93fb-c2e1616b1e33\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.287808 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.322374 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d955e25-1ea9-49f6-b98e-b431a8e82fa8-config\") pod \"ovsdbserver-nb-0\" (UID: \"7d955e25-1ea9-49f6-b98e-b431a8e82fa8\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.322444 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7d955e25-1ea9-49f6-b98e-b431a8e82fa8-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"7d955e25-1ea9-49f6-b98e-b431a8e82fa8\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.322504 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d955e25-1ea9-49f6-b98e-b431a8e82fa8-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7d955e25-1ea9-49f6-b98e-b431a8e82fa8\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.322525 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d955e25-1ea9-49f6-b98e-b431a8e82fa8-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7d955e25-1ea9-49f6-b98e-b431a8e82fa8\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.322570 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7d955e25-1ea9-49f6-b98e-b431a8e82fa8\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.322589 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tk7p\" (UniqueName: \"kubernetes.io/projected/7d955e25-1ea9-49f6-b98e-b431a8e82fa8-kube-api-access-8tk7p\") pod \"ovsdbserver-nb-0\" (UID: \"7d955e25-1ea9-49f6-b98e-b431a8e82fa8\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.322615 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7d955e25-1ea9-49f6-b98e-b431a8e82fa8-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7d955e25-1ea9-49f6-b98e-b431a8e82fa8\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.322648 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d955e25-1ea9-49f6-b98e-b431a8e82fa8-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7d955e25-1ea9-49f6-b98e-b431a8e82fa8\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.424881 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7d955e25-1ea9-49f6-b98e-b431a8e82fa8-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"7d955e25-1ea9-49f6-b98e-b431a8e82fa8\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.424942 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d955e25-1ea9-49f6-b98e-b431a8e82fa8-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7d955e25-1ea9-49f6-b98e-b431a8e82fa8\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.424975 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d955e25-1ea9-49f6-b98e-b431a8e82fa8-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7d955e25-1ea9-49f6-b98e-b431a8e82fa8\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.425010 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7d955e25-1ea9-49f6-b98e-b431a8e82fa8\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.425031 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tk7p\" (UniqueName: \"kubernetes.io/projected/7d955e25-1ea9-49f6-b98e-b431a8e82fa8-kube-api-access-8tk7p\") pod \"ovsdbserver-nb-0\" (UID: \"7d955e25-1ea9-49f6-b98e-b431a8e82fa8\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.425057 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d955e25-1ea9-49f6-b98e-b431a8e82fa8-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7d955e25-1ea9-49f6-b98e-b431a8e82fa8\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.425109 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d955e25-1ea9-49f6-b98e-b431a8e82fa8-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7d955e25-1ea9-49f6-b98e-b431a8e82fa8\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.425137 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d955e25-1ea9-49f6-b98e-b431a8e82fa8-config\") pod \"ovsdbserver-nb-0\" (UID: \"7d955e25-1ea9-49f6-b98e-b431a8e82fa8\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 
17:18:20.425791 4821 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7d955e25-1ea9-49f6-b98e-b431a8e82fa8\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.426001 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d955e25-1ea9-49f6-b98e-b431a8e82fa8-config\") pod \"ovsdbserver-nb-0\" (UID: \"7d955e25-1ea9-49f6-b98e-b431a8e82fa8\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.426834 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7d955e25-1ea9-49f6-b98e-b431a8e82fa8-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"7d955e25-1ea9-49f6-b98e-b431a8e82fa8\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.427821 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d955e25-1ea9-49f6-b98e-b431a8e82fa8-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7d955e25-1ea9-49f6-b98e-b431a8e82fa8\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.429959 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d955e25-1ea9-49f6-b98e-b431a8e82fa8-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7d955e25-1ea9-49f6-b98e-b431a8e82fa8\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.436675 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d955e25-1ea9-49f6-b98e-b431a8e82fa8-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7d955e25-1ea9-49f6-b98e-b431a8e82fa8\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.438675 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d955e25-1ea9-49f6-b98e-b431a8e82fa8-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7d955e25-1ea9-49f6-b98e-b431a8e82fa8\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.441801 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tk7p\" (UniqueName: \"kubernetes.io/projected/7d955e25-1ea9-49f6-b98e-b431a8e82fa8-kube-api-access-8tk7p\") pod \"ovsdbserver-nb-0\" (UID: \"7d955e25-1ea9-49f6-b98e-b431a8e82fa8\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.451524 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7d955e25-1ea9-49f6-b98e-b431a8e82fa8\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:18:20 crc kubenswrapper[4821]: I0930 17:18:20.513854 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Sep 30 17:18:24 crc kubenswrapper[4821]: I0930 17:18:24.500917 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Sep 30 17:18:24 crc kubenswrapper[4821]: I0930 17:18:24.760405 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Sep 30 17:18:25 crc kubenswrapper[4821]: E0930 17:18:25.249662 4821 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Sep 30 17:18:25 crc kubenswrapper[4821]: E0930 17:18:25.250105 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtqxd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-xrzbj_openstack(35fed46b-6384-49a9-9c1e-b6bae06ad8d5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 30 17:18:25 crc kubenswrapper[4821]: E0930 17:18:25.255459 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-xrzbj" podUID="35fed46b-6384-49a9-9c1e-b6bae06ad8d5" Sep 30 17:18:25 crc kubenswrapper[4821]: E0930 17:18:25.261255 4821 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Sep 30 17:18:25 crc 
kubenswrapper[4821]: E0930 17:18:25.261408 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xkwq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-6g8kh_openstack(255305c2-9daa-4299-8c3c-99bf312b7cd7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 30 17:18:25 crc kubenswrapper[4821]: E0930 17:18:25.267328 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-6g8kh" podUID="255305c2-9daa-4299-8c3c-99bf312b7cd7" Sep 30 17:18:25 crc kubenswrapper[4821]: E0930 17:18:25.270785 4821 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Sep 30 17:18:25 crc kubenswrapper[4821]: E0930 17:18:25.271181 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-69qlf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-zprcc_openstack(9abcd4e2-a109-467e-8b5f-ea49994d758f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 30 17:18:25 crc kubenswrapper[4821]: E0930 17:18:25.275852 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-zprcc" podUID="9abcd4e2-a109-467e-8b5f-ea49994d758f" Sep 30 17:18:25 crc kubenswrapper[4821]: I0930 17:18:25.356313 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9564b951-f1dc-471d-b442-9fc27616e8b6","Type":"ContainerStarted","Data":"33679ed752fabca5ce166b5666c3880abdc8086bc6740d6a6c2a064dedffd76a"} Sep 30 17:18:25 crc kubenswrapper[4821]: I0930 17:18:25.357514 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"96a178d8-73b6-4ba2-9976-b0544df00047","Type":"ContainerStarted","Data":"5a7d732c4f7d963cc5388466866c06039b50ec87dcfb9a7336409e1b86bf8f88"} Sep 30 17:18:25 crc kubenswrapper[4821]: E0930 17:18:25.361884 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-6g8kh" podUID="255305c2-9daa-4299-8c3c-99bf312b7cd7" Sep 30 17:18:26 crc kubenswrapper[4821]: I0930 17:18:26.851280 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-zprcc" Sep 30 17:18:26 crc kubenswrapper[4821]: I0930 17:18:26.857304 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xrzbj" Sep 30 17:18:26 crc kubenswrapper[4821]: I0930 17:18:26.915583 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9abcd4e2-a109-467e-8b5f-ea49994d758f-dns-svc\") pod \"9abcd4e2-a109-467e-8b5f-ea49994d758f\" (UID: \"9abcd4e2-a109-467e-8b5f-ea49994d758f\") " Sep 30 17:18:26 crc kubenswrapper[4821]: I0930 17:18:26.915671 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35fed46b-6384-49a9-9c1e-b6bae06ad8d5-config\") pod \"35fed46b-6384-49a9-9c1e-b6bae06ad8d5\" (UID: \"35fed46b-6384-49a9-9c1e-b6bae06ad8d5\") " Sep 30 17:18:26 crc kubenswrapper[4821]: I0930 17:18:26.916167 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9abcd4e2-a109-467e-8b5f-ea49994d758f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9abcd4e2-a109-467e-8b5f-ea49994d758f" (UID: "9abcd4e2-a109-467e-8b5f-ea49994d758f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:18:26 crc kubenswrapper[4821]: I0930 17:18:26.921281 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35fed46b-6384-49a9-9c1e-b6bae06ad8d5-config" (OuterVolumeSpecName: "config") pod "35fed46b-6384-49a9-9c1e-b6bae06ad8d5" (UID: "35fed46b-6384-49a9-9c1e-b6bae06ad8d5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:18:26 crc kubenswrapper[4821]: I0930 17:18:26.921460 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69qlf\" (UniqueName: \"kubernetes.io/projected/9abcd4e2-a109-467e-8b5f-ea49994d758f-kube-api-access-69qlf\") pod \"9abcd4e2-a109-467e-8b5f-ea49994d758f\" (UID: \"9abcd4e2-a109-467e-8b5f-ea49994d758f\") " Sep 30 17:18:26 crc kubenswrapper[4821]: I0930 17:18:26.922183 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtqxd\" (UniqueName: \"kubernetes.io/projected/35fed46b-6384-49a9-9c1e-b6bae06ad8d5-kube-api-access-jtqxd\") pod \"35fed46b-6384-49a9-9c1e-b6bae06ad8d5\" (UID: \"35fed46b-6384-49a9-9c1e-b6bae06ad8d5\") " Sep 30 17:18:26 crc kubenswrapper[4821]: I0930 17:18:26.922209 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9abcd4e2-a109-467e-8b5f-ea49994d758f-config\") pod \"9abcd4e2-a109-467e-8b5f-ea49994d758f\" (UID: \"9abcd4e2-a109-467e-8b5f-ea49994d758f\") " Sep 30 17:18:26 crc kubenswrapper[4821]: I0930 17:18:26.922835 4821 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9abcd4e2-a109-467e-8b5f-ea49994d758f-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 30 17:18:26 crc kubenswrapper[4821]: I0930 17:18:26.922852 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35fed46b-6384-49a9-9c1e-b6bae06ad8d5-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:18:26 crc kubenswrapper[4821]: I0930 17:18:26.922929 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9abcd4e2-a109-467e-8b5f-ea49994d758f-config" (OuterVolumeSpecName: "config") pod "9abcd4e2-a109-467e-8b5f-ea49994d758f" (UID: "9abcd4e2-a109-467e-8b5f-ea49994d758f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:18:26 crc kubenswrapper[4821]: I0930 17:18:26.924361 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9abcd4e2-a109-467e-8b5f-ea49994d758f-kube-api-access-69qlf" (OuterVolumeSpecName: "kube-api-access-69qlf") pod "9abcd4e2-a109-467e-8b5f-ea49994d758f" (UID: "9abcd4e2-a109-467e-8b5f-ea49994d758f"). InnerVolumeSpecName "kube-api-access-69qlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:18:26 crc kubenswrapper[4821]: I0930 17:18:26.924920 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35fed46b-6384-49a9-9c1e-b6bae06ad8d5-kube-api-access-jtqxd" (OuterVolumeSpecName: "kube-api-access-jtqxd") pod "35fed46b-6384-49a9-9c1e-b6bae06ad8d5" (UID: "35fed46b-6384-49a9-9c1e-b6bae06ad8d5"). InnerVolumeSpecName "kube-api-access-jtqxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:18:27 crc kubenswrapper[4821]: I0930 17:18:27.026213 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69qlf\" (UniqueName: \"kubernetes.io/projected/9abcd4e2-a109-467e-8b5f-ea49994d758f-kube-api-access-69qlf\") on node \"crc\" DevicePath \"\"" Sep 30 17:18:27 crc kubenswrapper[4821]: I0930 17:18:27.026554 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtqxd\" (UniqueName: \"kubernetes.io/projected/35fed46b-6384-49a9-9c1e-b6bae06ad8d5-kube-api-access-jtqxd\") on node \"crc\" DevicePath \"\"" Sep 30 17:18:27 crc kubenswrapper[4821]: I0930 17:18:27.026565 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9abcd4e2-a109-467e-8b5f-ea49994d758f-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:18:27 crc kubenswrapper[4821]: I0930 17:18:27.294862 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Sep 30 17:18:27 crc kubenswrapper[4821]: I0930 17:18:27.300909 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4bf4d"] Sep 30 17:18:27 crc kubenswrapper[4821]: I0930 17:18:27.306793 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Sep 30 17:18:27 crc kubenswrapper[4821]: W0930 17:18:27.315329 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fe717f3_bd67_4fbe_9f81_ba924767f2aa.slice/crio-18c8c969b6f1f3dea0863e92bb176dac641152c63961265116be770ef9072af4 WatchSource:0}: Error finding container 18c8c969b6f1f3dea0863e92bb176dac641152c63961265116be770ef9072af4: Status 404 returned error can't find the container with id 18c8c969b6f1f3dea0863e92bb176dac641152c63961265116be770ef9072af4 Sep 30 17:18:27 crc kubenswrapper[4821]: W0930 17:18:27.317378 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb53300f_a5be_4cf1_a5db_7847ae0d7e12.slice/crio-948d8defa8b7086357814c4e3f7aed593c4c262a0df1aff9f8690bcd631e5c96 WatchSource:0}: Error finding container 948d8defa8b7086357814c4e3f7aed593c4c262a0df1aff9f8690bcd631e5c96: Status 404 returned error can't find the container with id 948d8defa8b7086357814c4e3f7aed593c4c262a0df1aff9f8690bcd631e5c96 Sep 30 17:18:27 crc kubenswrapper[4821]: I0930 17:18:27.374276 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-xrzbj" 
event={"ID":"35fed46b-6384-49a9-9c1e-b6bae06ad8d5","Type":"ContainerDied","Data":"da299024d4a6d79d0f3c4d8cdb5c0d57c1bcff2a9231d5be9ddafa69297f184d"} Sep 30 17:18:27 crc kubenswrapper[4821]: I0930 17:18:27.376155 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4fe717f3-bd67-4fbe-9f81-ba924767f2aa","Type":"ContainerStarted","Data":"18c8c969b6f1f3dea0863e92bb176dac641152c63961265116be770ef9072af4"} Sep 30 17:18:27 crc kubenswrapper[4821]: I0930 17:18:27.381599 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3c950f02-8f72-4d89-af10-660187db2344","Type":"ContainerStarted","Data":"916213117f88f5615afa53e0f0f318c2b32fa61448cbb743d00cbca29edcaa13"} Sep 30 17:18:27 crc kubenswrapper[4821]: I0930 17:18:27.397797 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4bf4d" event={"ID":"bb53300f-a5be-4cf1-a5db-7847ae0d7e12","Type":"ContainerStarted","Data":"948d8defa8b7086357814c4e3f7aed593c4c262a0df1aff9f8690bcd631e5c96"} Sep 30 17:18:27 crc kubenswrapper[4821]: I0930 17:18:27.397866 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-zprcc" event={"ID":"9abcd4e2-a109-467e-8b5f-ea49994d758f","Type":"ContainerDied","Data":"c5c22df526d51e632f3e83a88609f28f6233aa57181748f1f081a5e0636f9521"} Sep 30 17:18:27 crc kubenswrapper[4821]: I0930 17:18:27.402384 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9564b951-f1dc-471d-b442-9fc27616e8b6","Type":"ContainerStarted","Data":"78f71d9b6757fc260c7967bcd111603996f5e90606635f69503b3b3c7112c511"} Sep 30 17:18:27 crc kubenswrapper[4821]: I0930 17:18:27.404781 4821 generic.go:334] "Generic (PLEG): container finished" podID="5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938" containerID="79c9d68c4f78dc0f7472d18d010783aea073969c62debc1aaa61f1c639f057a4" exitCode=0 Sep 30 17:18:27 crc kubenswrapper[4821]: I0930 17:18:27.404825 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-rbz4b" event={"ID":"5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938","Type":"ContainerDied","Data":"79c9d68c4f78dc0f7472d18d010783aea073969c62debc1aaa61f1c639f057a4"} Sep 30 17:18:27 crc kubenswrapper[4821]: I0930 17:18:27.420459 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xrzbj" Sep 30 17:18:27 crc kubenswrapper[4821]: I0930 17:18:27.420470 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"093b61b0-fba5-4f4e-8913-0c3700840535","Type":"ContainerStarted","Data":"99bb97f2099703cb038f2ca7fe474451e94dd2144cc33832395d387260c26319"} Sep 30 17:18:27 crc kubenswrapper[4821]: I0930 17:18:27.420539 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-zprcc" Sep 30 17:18:27 crc kubenswrapper[4821]: I0930 17:18:27.518279 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zprcc"] Sep 30 17:18:27 crc kubenswrapper[4821]: W0930 17:18:27.526198 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d955e25_1ea9_49f6_b98e_b431a8e82fa8.slice/crio-616e557b3c7c64e4743c7a6ecc2a308656cb57799f0496441db13a5dad22663d WatchSource:0}: Error finding container 616e557b3c7c64e4743c7a6ecc2a308656cb57799f0496441db13a5dad22663d: Status 404 returned error can't find the container with id 616e557b3c7c64e4743c7a6ecc2a308656cb57799f0496441db13a5dad22663d Sep 30 17:18:27 crc kubenswrapper[4821]: I0930 17:18:27.532548 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zprcc"] Sep 30 17:18:27 crc kubenswrapper[4821]: I0930 17:18:27.546345 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Sep 30 17:18:27 crc kubenswrapper[4821]: I0930 17:18:27.558027 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xrzbj"] Sep 30 17:18:27 crc kubenswrapper[4821]: I0930 17:18:27.564376 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xrzbj"] Sep 30 17:18:27 crc kubenswrapper[4821]: E0930 17:18:27.598641 4821 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Sep 30 17:18:27 crc kubenswrapper[4821]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Sep 30 17:18:27 crc kubenswrapper[4821]: > podSandboxID="fb123fcc2225a3a3f12e4e458e43ea34783a640335cc50e227b7e374f20f377f" Sep 30 17:18:27 crc kubenswrapper[4821]: E0930 17:18:27.598834 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=< Sep 30 17:18:27 crc kubenswrapper[4821]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6hnt6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-rbz4b_openstack(5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Sep 30 17:18:27 crc kubenswrapper[4821]: > logger="UnhandledError" Sep 30 17:18:27 crc kubenswrapper[4821]: E0930 17:18:27.600048 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-666b6646f7-rbz4b" podUID="5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938" Sep 30 17:18:28 crc kubenswrapper[4821]: I0930 17:18:28.156105 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-72fq8"] Sep 30 17:18:28 crc kubenswrapper[4821]: I0930 17:18:28.335664 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Sep 30 17:18:28 crc kubenswrapper[4821]: I0930 17:18:28.430054 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"7d955e25-1ea9-49f6-b98e-b431a8e82fa8","Type":"ContainerStarted","Data":"616e557b3c7c64e4743c7a6ecc2a308656cb57799f0496441db13a5dad22663d"} Sep 30 17:18:28 crc kubenswrapper[4821]: W0930 17:18:28.461008 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e8e6a32_bf76_4c50_b5ae_15fb08fb9028.slice/crio-1c726b1cc7e03c85059dce33f4b179d13d3d36d106e319b7be0a602cf4608250 WatchSource:0}: Error finding container 1c726b1cc7e03c85059dce33f4b179d13d3d36d106e319b7be0a602cf4608250: Status 404 returned error can't find the container with id 1c726b1cc7e03c85059dce33f4b179d13d3d36d106e319b7be0a602cf4608250 Sep 30 17:18:28 crc kubenswrapper[4821]: I0930 17:18:28.718987 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35fed46b-6384-49a9-9c1e-b6bae06ad8d5" path="/var/lib/kubelet/pods/35fed46b-6384-49a9-9c1e-b6bae06ad8d5/volumes" Sep 30 17:18:28 crc kubenswrapper[4821]: I0930 17:18:28.719845 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9abcd4e2-a109-467e-8b5f-ea49994d758f" path="/var/lib/kubelet/pods/9abcd4e2-a109-467e-8b5f-ea49994d758f/volumes" Sep 30 17:18:29 crc kubenswrapper[4821]: I0930 17:18:29.440918 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-rbz4b" event={"ID":"5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938","Type":"ContainerStarted","Data":"29c34a2df4624140c69fb562fb6e38fd6ee54ec8b4bc7c661af8212cf5a09d22"} Sep 30 17:18:29 crc kubenswrapper[4821]: I0930 17:18:29.441384 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-rbz4b" Sep 30 17:18:29 crc kubenswrapper[4821]: I0930 17:18:29.444481 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-72fq8" event={"ID":"8e8e6a32-bf76-4c50-b5ae-15fb08fb9028","Type":"ContainerStarted","Data":"1c726b1cc7e03c85059dce33f4b179d13d3d36d106e319b7be0a602cf4608250"} Sep 30 17:18:29 crc kubenswrapper[4821]: I0930 17:18:29.460787 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-rbz4b" podStartSLOduration=4.085372168 podStartE2EDuration="22.460768664s" podCreationTimestamp="2025-09-30 17:18:07 +0000 UTC" firstStartedPulling="2025-09-30 17:18:08.345542455 +0000 UTC m=+884.250588399" lastFinishedPulling="2025-09-30 17:18:26.720938951 +0000 UTC m=+902.625984895" observedRunningTime="2025-09-30 17:18:29.460716273 +0000 UTC m=+905.365762217" watchObservedRunningTime="2025-09-30 17:18:29.460768664 +0000 UTC m=+905.365814608" Sep 30 17:18:29 crc kubenswrapper[4821]: I0930 17:18:29.465635 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"96a178d8-73b6-4ba2-9976-b0544df00047","Type":"ContainerStarted","Data":"e6e9d9c700ed9333ba7145232ed57655ba00fda9ed30404104eaa356ca88cd64"} Sep 30 17:18:29 crc kubenswrapper[4821]: I0930 17:18:29.465782 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Sep 30 17:18:29 crc kubenswrapper[4821]: I0930 17:18:29.467693 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e671ec19-1ea3-4632-93fb-c2e1616b1e33","Type":"ContainerStarted","Data":"d15afcd43ac52e539c9393fa2149efedc66347d045b140d2389aa75f723a324f"} Sep 30 17:18:29 crc kubenswrapper[4821]: I0930 17:18:29.485024 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" 
podStartSLOduration=15.223589653 podStartE2EDuration="18.485008806s" podCreationTimestamp="2025-09-30 17:18:11 +0000 UTC" firstStartedPulling="2025-09-30 17:18:25.244640147 +0000 UTC m=+901.149686091" lastFinishedPulling="2025-09-30 17:18:28.5060593 +0000 UTC m=+904.411105244" observedRunningTime="2025-09-30 17:18:29.484149625 +0000 UTC m=+905.389195569" watchObservedRunningTime="2025-09-30 17:18:29.485008806 +0000 UTC m=+905.390054750" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.012922 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-mlhfm"] Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.014190 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-mlhfm" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.016472 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.039172 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-mlhfm"] Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.122514 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90a47708-f26b-40cc-b6c7-a436b54470e1-combined-ca-bundle\") pod \"ovn-controller-metrics-mlhfm\" (UID: \"90a47708-f26b-40cc-b6c7-a436b54470e1\") " pod="openstack/ovn-controller-metrics-mlhfm" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.122602 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/90a47708-f26b-40cc-b6c7-a436b54470e1-ovs-rundir\") pod \"ovn-controller-metrics-mlhfm\" (UID: \"90a47708-f26b-40cc-b6c7-a436b54470e1\") " pod="openstack/ovn-controller-metrics-mlhfm" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.122662 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/90a47708-f26b-40cc-b6c7-a436b54470e1-ovn-rundir\") pod \"ovn-controller-metrics-mlhfm\" (UID: \"90a47708-f26b-40cc-b6c7-a436b54470e1\") " pod="openstack/ovn-controller-metrics-mlhfm" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.122688 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgdhm\" (UniqueName: \"kubernetes.io/projected/90a47708-f26b-40cc-b6c7-a436b54470e1-kube-api-access-mgdhm\") pod \"ovn-controller-metrics-mlhfm\" (UID: \"90a47708-f26b-40cc-b6c7-a436b54470e1\") " pod="openstack/ovn-controller-metrics-mlhfm" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.122752 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90a47708-f26b-40cc-b6c7-a436b54470e1-config\") pod \"ovn-controller-metrics-mlhfm\" (UID: \"90a47708-f26b-40cc-b6c7-a436b54470e1\") " pod="openstack/ovn-controller-metrics-mlhfm" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.122804 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/90a47708-f26b-40cc-b6c7-a436b54470e1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mlhfm\" (UID: \"90a47708-f26b-40cc-b6c7-a436b54470e1\") " 
pod="openstack/ovn-controller-metrics-mlhfm" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.192218 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6g8kh"] Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.224684 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/90a47708-f26b-40cc-b6c7-a436b54470e1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mlhfm\" (UID: \"90a47708-f26b-40cc-b6c7-a436b54470e1\") " pod="openstack/ovn-controller-metrics-mlhfm" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.224785 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90a47708-f26b-40cc-b6c7-a436b54470e1-combined-ca-bundle\") pod \"ovn-controller-metrics-mlhfm\" (UID: \"90a47708-f26b-40cc-b6c7-a436b54470e1\") " pod="openstack/ovn-controller-metrics-mlhfm" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.224812 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/90a47708-f26b-40cc-b6c7-a436b54470e1-ovs-rundir\") pod \"ovn-controller-metrics-mlhfm\" (UID: \"90a47708-f26b-40cc-b6c7-a436b54470e1\") " pod="openstack/ovn-controller-metrics-mlhfm" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.224846 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/90a47708-f26b-40cc-b6c7-a436b54470e1-ovn-rundir\") pod \"ovn-controller-metrics-mlhfm\" (UID: \"90a47708-f26b-40cc-b6c7-a436b54470e1\") " pod="openstack/ovn-controller-metrics-mlhfm" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.224873 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgdhm\" (UniqueName: \"kubernetes.io/projected/90a47708-f26b-40cc-b6c7-a436b54470e1-kube-api-access-mgdhm\") pod \"ovn-controller-metrics-mlhfm\" (UID: \"90a47708-f26b-40cc-b6c7-a436b54470e1\") " pod="openstack/ovn-controller-metrics-mlhfm" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.224919 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90a47708-f26b-40cc-b6c7-a436b54470e1-config\") pod \"ovn-controller-metrics-mlhfm\" (UID: \"90a47708-f26b-40cc-b6c7-a436b54470e1\") " pod="openstack/ovn-controller-metrics-mlhfm" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.225256 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/90a47708-f26b-40cc-b6c7-a436b54470e1-ovs-rundir\") pod \"ovn-controller-metrics-mlhfm\" (UID: \"90a47708-f26b-40cc-b6c7-a436b54470e1\") " pod="openstack/ovn-controller-metrics-mlhfm" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.225278 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/90a47708-f26b-40cc-b6c7-a436b54470e1-ovn-rundir\") pod \"ovn-controller-metrics-mlhfm\" (UID: \"90a47708-f26b-40cc-b6c7-a436b54470e1\") " pod="openstack/ovn-controller-metrics-mlhfm" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.226129 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90a47708-f26b-40cc-b6c7-a436b54470e1-config\") pod 
\"ovn-controller-metrics-mlhfm\" (UID: \"90a47708-f26b-40cc-b6c7-a436b54470e1\") " pod="openstack/ovn-controller-metrics-mlhfm" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.254022 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90a47708-f26b-40cc-b6c7-a436b54470e1-combined-ca-bundle\") pod \"ovn-controller-metrics-mlhfm\" (UID: \"90a47708-f26b-40cc-b6c7-a436b54470e1\") " pod="openstack/ovn-controller-metrics-mlhfm" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.257674 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/90a47708-f26b-40cc-b6c7-a436b54470e1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mlhfm\" (UID: \"90a47708-f26b-40cc-b6c7-a436b54470e1\") " pod="openstack/ovn-controller-metrics-mlhfm" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.261148 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-vj24r"] Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.262378 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-vj24r" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.275289 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.287654 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-vj24r"] Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.287693 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgdhm\" (UniqueName: \"kubernetes.io/projected/90a47708-f26b-40cc-b6c7-a436b54470e1-kube-api-access-mgdhm\") pod \"ovn-controller-metrics-mlhfm\" (UID: \"90a47708-f26b-40cc-b6c7-a436b54470e1\") " pod="openstack/ovn-controller-metrics-mlhfm" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.333972 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-mlhfm" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.402153 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rbz4b"] Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.402351 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-rbz4b" podUID="5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938" containerName="dnsmasq-dns" containerID="cri-o://29c34a2df4624140c69fb562fb6e38fd6ee54ec8b4bc7c661af8212cf5a09d22" gracePeriod=10 Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.426238 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-hqt9z"] Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.427407 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-hqt9z" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.439074 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.439776 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjhps\" (UniqueName: \"kubernetes.io/projected/c435e242-741e-434a-af32-8e45ca9cdb1f-kube-api-access-cjhps\") pod \"dnsmasq-dns-7f896c8c65-vj24r\" (UID: \"c435e242-741e-434a-af32-8e45ca9cdb1f\") " pod="openstack/dnsmasq-dns-7f896c8c65-vj24r" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.439822 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c435e242-741e-434a-af32-8e45ca9cdb1f-config\") pod \"dnsmasq-dns-7f896c8c65-vj24r\" (UID: \"c435e242-741e-434a-af32-8e45ca9cdb1f\") " pod="openstack/dnsmasq-dns-7f896c8c65-vj24r" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.439921 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c435e242-741e-434a-af32-8e45ca9cdb1f-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-vj24r\" (UID: \"c435e242-741e-434a-af32-8e45ca9cdb1f\") " pod="openstack/dnsmasq-dns-7f896c8c65-vj24r" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.439956 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c435e242-741e-434a-af32-8e45ca9cdb1f-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-vj24r\" (UID: \"c435e242-741e-434a-af32-8e45ca9cdb1f\") " pod="openstack/dnsmasq-dns-7f896c8c65-vj24r" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.463235 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-hqt9z"] Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.541239 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjhps\" (UniqueName: \"kubernetes.io/projected/c435e242-741e-434a-af32-8e45ca9cdb1f-kube-api-access-cjhps\") pod \"dnsmasq-dns-7f896c8c65-vj24r\" (UID: \"c435e242-741e-434a-af32-8e45ca9cdb1f\") " pod="openstack/dnsmasq-dns-7f896c8c65-vj24r" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.541278 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c435e242-741e-434a-af32-8e45ca9cdb1f-config\") pod \"dnsmasq-dns-7f896c8c65-vj24r\" (UID: \"c435e242-741e-434a-af32-8e45ca9cdb1f\") " pod="openstack/dnsmasq-dns-7f896c8c65-vj24r" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.541514 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f2f19e5-bd02-4369-b594-ee71c4c83509-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-hqt9z\" (UID: \"2f2f19e5-bd02-4369-b594-ee71c4c83509\") " pod="openstack/dnsmasq-dns-86db49b7ff-hqt9z" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.541537 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c435e242-741e-434a-af32-8e45ca9cdb1f-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-vj24r\" (UID: \"c435e242-741e-434a-af32-8e45ca9cdb1f\") " 
pod="openstack/dnsmasq-dns-7f896c8c65-vj24r" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.541558 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c435e242-741e-434a-af32-8e45ca9cdb1f-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-vj24r\" (UID: \"c435e242-741e-434a-af32-8e45ca9cdb1f\") " pod="openstack/dnsmasq-dns-7f896c8c65-vj24r" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.541574 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cgv7\" (UniqueName: \"kubernetes.io/projected/2f2f19e5-bd02-4369-b594-ee71c4c83509-kube-api-access-9cgv7\") pod \"dnsmasq-dns-86db49b7ff-hqt9z\" (UID: \"2f2f19e5-bd02-4369-b594-ee71c4c83509\") " pod="openstack/dnsmasq-dns-86db49b7ff-hqt9z" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.541591 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f2f19e5-bd02-4369-b594-ee71c4c83509-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-hqt9z\" (UID: \"2f2f19e5-bd02-4369-b594-ee71c4c83509\") " pod="openstack/dnsmasq-dns-86db49b7ff-hqt9z" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.541613 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f2f19e5-bd02-4369-b594-ee71c4c83509-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-hqt9z\" (UID: \"2f2f19e5-bd02-4369-b594-ee71c4c83509\") " pod="openstack/dnsmasq-dns-86db49b7ff-hqt9z" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.541630 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f2f19e5-bd02-4369-b594-ee71c4c83509-config\") pod \"dnsmasq-dns-86db49b7ff-hqt9z\" (UID: \"2f2f19e5-bd02-4369-b594-ee71c4c83509\") " pod="openstack/dnsmasq-dns-86db49b7ff-hqt9z" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.542713 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c435e242-741e-434a-af32-8e45ca9cdb1f-config\") pod \"dnsmasq-dns-7f896c8c65-vj24r\" (UID: \"c435e242-741e-434a-af32-8e45ca9cdb1f\") " pod="openstack/dnsmasq-dns-7f896c8c65-vj24r" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.543238 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c435e242-741e-434a-af32-8e45ca9cdb1f-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-vj24r\" (UID: \"c435e242-741e-434a-af32-8e45ca9cdb1f\") " pod="openstack/dnsmasq-dns-7f896c8c65-vj24r" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.543697 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c435e242-741e-434a-af32-8e45ca9cdb1f-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-vj24r\" (UID: \"c435e242-741e-434a-af32-8e45ca9cdb1f\") " pod="openstack/dnsmasq-dns-7f896c8c65-vj24r" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.562557 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjhps\" (UniqueName: \"kubernetes.io/projected/c435e242-741e-434a-af32-8e45ca9cdb1f-kube-api-access-cjhps\") pod \"dnsmasq-dns-7f896c8c65-vj24r\" (UID: \"c435e242-741e-434a-af32-8e45ca9cdb1f\") " pod="openstack/dnsmasq-dns-7f896c8c65-vj24r" Sep 30 
17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.636468 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-vj24r" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.643191 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f2f19e5-bd02-4369-b594-ee71c4c83509-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-hqt9z\" (UID: \"2f2f19e5-bd02-4369-b594-ee71c4c83509\") " pod="openstack/dnsmasq-dns-86db49b7ff-hqt9z" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.643268 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cgv7\" (UniqueName: \"kubernetes.io/projected/2f2f19e5-bd02-4369-b594-ee71c4c83509-kube-api-access-9cgv7\") pod \"dnsmasq-dns-86db49b7ff-hqt9z\" (UID: \"2f2f19e5-bd02-4369-b594-ee71c4c83509\") " pod="openstack/dnsmasq-dns-86db49b7ff-hqt9z" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.643293 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f2f19e5-bd02-4369-b594-ee71c4c83509-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-hqt9z\" (UID: \"2f2f19e5-bd02-4369-b594-ee71c4c83509\") " pod="openstack/dnsmasq-dns-86db49b7ff-hqt9z" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.643322 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f2f19e5-bd02-4369-b594-ee71c4c83509-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-hqt9z\" (UID: \"2f2f19e5-bd02-4369-b594-ee71c4c83509\") " pod="openstack/dnsmasq-dns-86db49b7ff-hqt9z" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.643348 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f2f19e5-bd02-4369-b594-ee71c4c83509-config\") pod \"dnsmasq-dns-86db49b7ff-hqt9z\" (UID: \"2f2f19e5-bd02-4369-b594-ee71c4c83509\") " pod="openstack/dnsmasq-dns-86db49b7ff-hqt9z" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.643964 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f2f19e5-bd02-4369-b594-ee71c4c83509-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-hqt9z\" (UID: \"2f2f19e5-bd02-4369-b594-ee71c4c83509\") " pod="openstack/dnsmasq-dns-86db49b7ff-hqt9z" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.644249 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f2f19e5-bd02-4369-b594-ee71c4c83509-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-hqt9z\" (UID: \"2f2f19e5-bd02-4369-b594-ee71c4c83509\") " pod="openstack/dnsmasq-dns-86db49b7ff-hqt9z" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.644428 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f2f19e5-bd02-4369-b594-ee71c4c83509-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-hqt9z\" (UID: \"2f2f19e5-bd02-4369-b594-ee71c4c83509\") " pod="openstack/dnsmasq-dns-86db49b7ff-hqt9z" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.644572 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f2f19e5-bd02-4369-b594-ee71c4c83509-config\") pod \"dnsmasq-dns-86db49b7ff-hqt9z\" (UID: \"2f2f19e5-bd02-4369-b594-ee71c4c83509\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-hqt9z" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.659652 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cgv7\" (UniqueName: \"kubernetes.io/projected/2f2f19e5-bd02-4369-b594-ee71c4c83509-kube-api-access-9cgv7\") pod \"dnsmasq-dns-86db49b7ff-hqt9z\" (UID: \"2f2f19e5-bd02-4369-b594-ee71c4c83509\") " pod="openstack/dnsmasq-dns-86db49b7ff-hqt9z" Sep 30 17:18:32 crc kubenswrapper[4821]: I0930 17:18:32.760373 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-hqt9z" Sep 30 17:18:33 crc kubenswrapper[4821]: I0930 17:18:33.496966 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-6g8kh" Sep 30 17:18:33 crc kubenswrapper[4821]: I0930 17:18:33.525676 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-6g8kh" event={"ID":"255305c2-9daa-4299-8c3c-99bf312b7cd7","Type":"ContainerDied","Data":"8395865730c92cee323556232f075c877e3bfa9f750c2ab67976146b62d95294"} Sep 30 17:18:33 crc kubenswrapper[4821]: I0930 17:18:33.525756 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-6g8kh" Sep 30 17:18:33 crc kubenswrapper[4821]: I0930 17:18:33.533218 4821 generic.go:334] "Generic (PLEG): container finished" podID="5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938" containerID="29c34a2df4624140c69fb562fb6e38fd6ee54ec8b4bc7c661af8212cf5a09d22" exitCode=0 Sep 30 17:18:33 crc kubenswrapper[4821]: I0930 17:18:33.533363 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-rbz4b" event={"ID":"5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938","Type":"ContainerDied","Data":"29c34a2df4624140c69fb562fb6e38fd6ee54ec8b4bc7c661af8212cf5a09d22"} Sep 30 17:18:33 crc kubenswrapper[4821]: I0930 17:18:33.657184 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkwq7\" (UniqueName: \"kubernetes.io/projected/255305c2-9daa-4299-8c3c-99bf312b7cd7-kube-api-access-xkwq7\") pod \"255305c2-9daa-4299-8c3c-99bf312b7cd7\" (UID: \"255305c2-9daa-4299-8c3c-99bf312b7cd7\") " Sep 30 17:18:33 crc kubenswrapper[4821]: I0930 17:18:33.658316 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/255305c2-9daa-4299-8c3c-99bf312b7cd7-dns-svc\") pod \"255305c2-9daa-4299-8c3c-99bf312b7cd7\" (UID: \"255305c2-9daa-4299-8c3c-99bf312b7cd7\") " Sep 30 17:18:33 crc kubenswrapper[4821]: I0930 17:18:33.658359 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/255305c2-9daa-4299-8c3c-99bf312b7cd7-config\") pod \"255305c2-9daa-4299-8c3c-99bf312b7cd7\" (UID: \"255305c2-9daa-4299-8c3c-99bf312b7cd7\") " Sep 30 17:18:33 crc kubenswrapper[4821]: I0930 17:18:33.660093 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/255305c2-9daa-4299-8c3c-99bf312b7cd7-config" (OuterVolumeSpecName: "config") pod "255305c2-9daa-4299-8c3c-99bf312b7cd7" (UID: "255305c2-9daa-4299-8c3c-99bf312b7cd7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:18:33 crc kubenswrapper[4821]: I0930 17:18:33.661015 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/255305c2-9daa-4299-8c3c-99bf312b7cd7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "255305c2-9daa-4299-8c3c-99bf312b7cd7" (UID: "255305c2-9daa-4299-8c3c-99bf312b7cd7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:18:33 crc kubenswrapper[4821]: I0930 17:18:33.690276 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/255305c2-9daa-4299-8c3c-99bf312b7cd7-kube-api-access-xkwq7" (OuterVolumeSpecName: "kube-api-access-xkwq7") pod "255305c2-9daa-4299-8c3c-99bf312b7cd7" (UID: "255305c2-9daa-4299-8c3c-99bf312b7cd7"). InnerVolumeSpecName "kube-api-access-xkwq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:18:33 crc kubenswrapper[4821]: I0930 17:18:33.760749 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkwq7\" (UniqueName: \"kubernetes.io/projected/255305c2-9daa-4299-8c3c-99bf312b7cd7-kube-api-access-xkwq7\") on node \"crc\" DevicePath \"\"" Sep 30 17:18:33 crc kubenswrapper[4821]: I0930 17:18:33.760783 4821 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/255305c2-9daa-4299-8c3c-99bf312b7cd7-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 30 17:18:33 crc kubenswrapper[4821]: I0930 17:18:33.760793 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/255305c2-9daa-4299-8c3c-99bf312b7cd7-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:18:33 crc kubenswrapper[4821]: I0930 17:18:33.903829 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6g8kh"] Sep 30 17:18:33 crc kubenswrapper[4821]: I0930 17:18:33.906948 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6g8kh"] Sep 30 17:18:34 crc kubenswrapper[4821]: I0930 17:18:34.027592 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-rbz4b" Sep 30 17:18:34 crc kubenswrapper[4821]: I0930 17:18:34.166261 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hnt6\" (UniqueName: \"kubernetes.io/projected/5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938-kube-api-access-6hnt6\") pod \"5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938\" (UID: \"5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938\") " Sep 30 17:18:34 crc kubenswrapper[4821]: I0930 17:18:34.166376 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938-config\") pod \"5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938\" (UID: \"5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938\") " Sep 30 17:18:34 crc kubenswrapper[4821]: I0930 17:18:34.166431 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938-dns-svc\") pod \"5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938\" (UID: \"5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938\") " Sep 30 17:18:34 crc kubenswrapper[4821]: I0930 17:18:34.173420 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938-kube-api-access-6hnt6" (OuterVolumeSpecName: "kube-api-access-6hnt6") pod "5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938" (UID: "5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938"). InnerVolumeSpecName "kube-api-access-6hnt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:18:34 crc kubenswrapper[4821]: I0930 17:18:34.217800 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938-config" (OuterVolumeSpecName: "config") pod "5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938" (UID: "5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:18:34 crc kubenswrapper[4821]: I0930 17:18:34.241638 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938" (UID: "5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:18:34 crc kubenswrapper[4821]: I0930 17:18:34.269052 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hnt6\" (UniqueName: \"kubernetes.io/projected/5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938-kube-api-access-6hnt6\") on node \"crc\" DevicePath \"\"" Sep 30 17:18:34 crc kubenswrapper[4821]: I0930 17:18:34.269103 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:18:34 crc kubenswrapper[4821]: I0930 17:18:34.269113 4821 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 30 17:18:34 crc kubenswrapper[4821]: I0930 17:18:34.541659 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-rbz4b" event={"ID":"5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938","Type":"ContainerDied","Data":"fb123fcc2225a3a3f12e4e458e43ea34783a640335cc50e227b7e374f20f377f"} Sep 30 17:18:34 crc kubenswrapper[4821]: I0930 17:18:34.541710 4821 scope.go:117] "RemoveContainer" containerID="29c34a2df4624140c69fb562fb6e38fd6ee54ec8b4bc7c661af8212cf5a09d22" Sep 30 17:18:34 crc kubenswrapper[4821]: I0930 17:18:34.541810 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-rbz4b" Sep 30 17:18:34 crc kubenswrapper[4821]: I0930 17:18:34.578347 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rbz4b"] Sep 30 17:18:34 crc kubenswrapper[4821]: I0930 17:18:34.584551 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rbz4b"] Sep 30 17:18:34 crc kubenswrapper[4821]: I0930 17:18:34.722073 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="255305c2-9daa-4299-8c3c-99bf312b7cd7" path="/var/lib/kubelet/pods/255305c2-9daa-4299-8c3c-99bf312b7cd7/volumes" Sep 30 17:18:34 crc kubenswrapper[4821]: I0930 17:18:34.722470 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938" path="/var/lib/kubelet/pods/5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938/volumes" Sep 30 17:18:34 crc kubenswrapper[4821]: I0930 17:18:34.862374 4821 scope.go:117] "RemoveContainer" containerID="79c9d68c4f78dc0f7472d18d010783aea073969c62debc1aaa61f1c639f057a4" Sep 30 17:18:35 crc kubenswrapper[4821]: I0930 17:18:35.379541 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-mlhfm"] Sep 30 17:18:35 crc kubenswrapper[4821]: I0930 17:18:35.389774 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-hqt9z"] Sep 30 17:18:35 crc kubenswrapper[4821]: W0930 17:18:35.397122 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90a47708_f26b_40cc_b6c7_a436b54470e1.slice/crio-aa8b210c78bb3f8102e2236bac7c8903d20cc7dc898587f59faabcaefa8a8561 WatchSource:0}: Error finding container aa8b210c78bb3f8102e2236bac7c8903d20cc7dc898587f59faabcaefa8a8561: Status 404 returned error can't find the container with id aa8b210c78bb3f8102e2236bac7c8903d20cc7dc898587f59faabcaefa8a8561 Sep 30 17:18:35 crc kubenswrapper[4821]: W0930 17:18:35.406512 4821 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f2f19e5_bd02_4369_b594_ee71c4c83509.slice/crio-fc72537958f366b138ef658a9cdcbc51c2741c18d480f817ed47f983f2330f4e WatchSource:0}: Error finding container fc72537958f366b138ef658a9cdcbc51c2741c18d480f817ed47f983f2330f4e: Status 404 returned error can't find the container with id fc72537958f366b138ef658a9cdcbc51c2741c18d480f817ed47f983f2330f4e Sep 30 17:18:35 crc kubenswrapper[4821]: I0930 17:18:35.501678 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-vj24r"] Sep 30 17:18:35 crc kubenswrapper[4821]: W0930 17:18:35.521448 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc435e242_741e_434a_af32_8e45ca9cdb1f.slice/crio-6908dece30c629abd6787433daff770c61eff7c526d9d5bfe17724a0952e7bee WatchSource:0}: Error finding container 6908dece30c629abd6787433daff770c61eff7c526d9d5bfe17724a0952e7bee: Status 404 returned error can't find the container with id 6908dece30c629abd6787433daff770c61eff7c526d9d5bfe17724a0952e7bee Sep 30 17:18:35 crc kubenswrapper[4821]: I0930 17:18:35.555831 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-mlhfm" event={"ID":"90a47708-f26b-40cc-b6c7-a436b54470e1","Type":"ContainerStarted","Data":"aa8b210c78bb3f8102e2236bac7c8903d20cc7dc898587f59faabcaefa8a8561"} Sep 30 17:18:35 crc kubenswrapper[4821]: I0930 17:18:35.561487 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-72fq8" event={"ID":"8e8e6a32-bf76-4c50-b5ae-15fb08fb9028","Type":"ContainerStarted","Data":"b2f665f62ce2e7e4c5f269c7bbff5f15c06432d036dfd6c2cdafb39ccb668d96"} Sep 30 17:18:35 crc kubenswrapper[4821]: I0930 17:18:35.564895 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4fe717f3-bd67-4fbe-9f81-ba924767f2aa","Type":"ContainerStarted","Data":"ea8352a5f781b835026a48b4e89bc5a9e5aa31ef07e385e3712da225e075ffd3"} Sep 30 17:18:35 crc kubenswrapper[4821]: I0930 17:18:35.570364 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"093b61b0-fba5-4f4e-8913-0c3700840535","Type":"ContainerStarted","Data":"dd1bdb298cd3aebeec45b8d33354b58e3efb4aced5708c024c0f739db0202fdf"} Sep 30 17:18:35 crc kubenswrapper[4821]: I0930 17:18:35.588630 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-vj24r" event={"ID":"c435e242-741e-434a-af32-8e45ca9cdb1f","Type":"ContainerStarted","Data":"6908dece30c629abd6787433daff770c61eff7c526d9d5bfe17724a0952e7bee"} Sep 30 17:18:35 crc kubenswrapper[4821]: I0930 17:18:35.597389 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4bf4d" event={"ID":"bb53300f-a5be-4cf1-a5db-7847ae0d7e12","Type":"ContainerStarted","Data":"11bcc49688040f3e65f1756311d9ef62569e80e0ccbfcd6556706c0a8a77d233"} Sep 30 17:18:35 crc kubenswrapper[4821]: I0930 17:18:35.597464 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-4bf4d" Sep 30 17:18:35 crc kubenswrapper[4821]: I0930 17:18:35.601348 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-hqt9z" event={"ID":"2f2f19e5-bd02-4369-b594-ee71c4c83509","Type":"ContainerStarted","Data":"fc72537958f366b138ef658a9cdcbc51c2741c18d480f817ed47f983f2330f4e"} Sep 30 17:18:35 crc kubenswrapper[4821]: I0930 17:18:35.607739 4821 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"7d955e25-1ea9-49f6-b98e-b431a8e82fa8","Type":"ContainerStarted","Data":"db65eeb51801f530b7d254834c5f2e4734d1f1706db7e93978c8e86d3935ccf7"} Sep 30 17:18:35 crc kubenswrapper[4821]: I0930 17:18:35.614975 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e671ec19-1ea3-4632-93fb-c2e1616b1e33","Type":"ContainerStarted","Data":"cc0cb8b9abea4b3366217ccf78c0a0dddf71cc4e49ad3018f04736932fecc4e1"} Sep 30 17:18:35 crc kubenswrapper[4821]: I0930 17:18:35.663809 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-4bf4d" podStartSLOduration=10.098053112 podStartE2EDuration="17.663791615s" podCreationTimestamp="2025-09-30 17:18:18 +0000 UTC" firstStartedPulling="2025-09-30 17:18:27.319902015 +0000 UTC m=+903.224947949" lastFinishedPulling="2025-09-30 17:18:34.885640488 +0000 UTC m=+910.790686452" observedRunningTime="2025-09-30 17:18:35.659194571 +0000 UTC m=+911.564240515" watchObservedRunningTime="2025-09-30 17:18:35.663791615 +0000 UTC m=+911.568837559" Sep 30 17:18:36 crc kubenswrapper[4821]: I0930 17:18:36.627995 4821 generic.go:334] "Generic (PLEG): container finished" podID="2f2f19e5-bd02-4369-b594-ee71c4c83509" containerID="02d11ea930c4e42ecd9dc405f4a06a52a1a0d9109c326a6f197349fc275fa6ae" exitCode=0 Sep 30 17:18:36 crc kubenswrapper[4821]: I0930 17:18:36.628106 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-hqt9z" event={"ID":"2f2f19e5-bd02-4369-b594-ee71c4c83509","Type":"ContainerDied","Data":"02d11ea930c4e42ecd9dc405f4a06a52a1a0d9109c326a6f197349fc275fa6ae"} Sep 30 17:18:36 crc kubenswrapper[4821]: I0930 17:18:36.641645 4821 generic.go:334] "Generic (PLEG): container finished" podID="c435e242-741e-434a-af32-8e45ca9cdb1f" containerID="a3686e25b64baee6a52422c086c3cd7b42b3117876e631ebbb7f4b310e65a835" exitCode=0 Sep 30 17:18:36 crc kubenswrapper[4821]: I0930 17:18:36.641768 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-vj24r" event={"ID":"c435e242-741e-434a-af32-8e45ca9cdb1f","Type":"ContainerDied","Data":"a3686e25b64baee6a52422c086c3cd7b42b3117876e631ebbb7f4b310e65a835"} Sep 30 17:18:36 crc kubenswrapper[4821]: I0930 17:18:36.644423 4821 generic.go:334] "Generic (PLEG): container finished" podID="8e8e6a32-bf76-4c50-b5ae-15fb08fb9028" containerID="b2f665f62ce2e7e4c5f269c7bbff5f15c06432d036dfd6c2cdafb39ccb668d96" exitCode=0 Sep 30 17:18:36 crc kubenswrapper[4821]: I0930 17:18:36.645171 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-72fq8" event={"ID":"8e8e6a32-bf76-4c50-b5ae-15fb08fb9028","Type":"ContainerDied","Data":"b2f665f62ce2e7e4c5f269c7bbff5f15c06432d036dfd6c2cdafb39ccb668d96"} Sep 30 17:18:37 crc kubenswrapper[4821]: I0930 17:18:37.358274 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Sep 30 17:18:37 crc kubenswrapper[4821]: I0930 17:18:37.672138 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-72fq8" event={"ID":"8e8e6a32-bf76-4c50-b5ae-15fb08fb9028","Type":"ContainerStarted","Data":"3915e20867c01998857a83e4cb22968e5c21feb55df7ba5d2cf04608263354f1"} Sep 30 17:18:37 crc kubenswrapper[4821]: I0930 17:18:37.672184 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-72fq8" 
event={"ID":"8e8e6a32-bf76-4c50-b5ae-15fb08fb9028","Type":"ContainerStarted","Data":"53f2e043ed97ad0e05fc515c3efb3b7b5f443fe56823844a8f71531ccdd073f0"} Sep 30 17:18:37 crc kubenswrapper[4821]: I0930 17:18:37.673187 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-72fq8" Sep 30 17:18:37 crc kubenswrapper[4821]: I0930 17:18:37.673225 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-72fq8" Sep 30 17:18:37 crc kubenswrapper[4821]: I0930 17:18:37.677623 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-hqt9z" event={"ID":"2f2f19e5-bd02-4369-b594-ee71c4c83509","Type":"ContainerStarted","Data":"dd8bf1837b14ce49342b508228007f30436900c3b57e32f16d73b17d9b8f629a"} Sep 30 17:18:37 crc kubenswrapper[4821]: I0930 17:18:37.678388 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-hqt9z" Sep 30 17:18:37 crc kubenswrapper[4821]: I0930 17:18:37.680974 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-vj24r" event={"ID":"c435e242-741e-434a-af32-8e45ca9cdb1f","Type":"ContainerStarted","Data":"7ae2425506754fd72098cf6770a94a6ed0f2ccd212025142b7b94fa68e33c8ee"} Sep 30 17:18:37 crc kubenswrapper[4821]: I0930 17:18:37.681428 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f896c8c65-vj24r" Sep 30 17:18:37 crc kubenswrapper[4821]: I0930 17:18:37.695630 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-72fq8" podStartSLOduration=13.322850776 podStartE2EDuration="19.695612364s" podCreationTimestamp="2025-09-30 17:18:18 +0000 UTC" firstStartedPulling="2025-09-30 17:18:28.465526433 +0000 UTC m=+904.370572377" lastFinishedPulling="2025-09-30 17:18:34.838288021 +0000 UTC m=+910.743333965" observedRunningTime="2025-09-30 17:18:37.688361344 +0000 UTC m=+913.593407288" watchObservedRunningTime="2025-09-30 17:18:37.695612364 +0000 UTC m=+913.600658308" Sep 30 17:18:37 crc kubenswrapper[4821]: I0930 17:18:37.719071 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f896c8c65-vj24r" podStartSLOduration=5.719056567 podStartE2EDuration="5.719056567s" podCreationTimestamp="2025-09-30 17:18:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:37.715911379 +0000 UTC m=+913.620957323" watchObservedRunningTime="2025-09-30 17:18:37.719056567 +0000 UTC m=+913.624102511" Sep 30 17:18:37 crc kubenswrapper[4821]: I0930 17:18:37.752663 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-hqt9z" podStartSLOduration=5.752639882 podStartE2EDuration="5.752639882s" podCreationTimestamp="2025-09-30 17:18:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:37.73969488 +0000 UTC m=+913.644740824" watchObservedRunningTime="2025-09-30 17:18:37.752639882 +0000 UTC m=+913.657685836" Sep 30 17:18:39 crc kubenswrapper[4821]: I0930 17:18:39.695602 4821 generic.go:334] "Generic (PLEG): container finished" podID="093b61b0-fba5-4f4e-8913-0c3700840535" containerID="dd1bdb298cd3aebeec45b8d33354b58e3efb4aced5708c024c0f739db0202fdf" exitCode=0 Sep 30 17:18:39 crc kubenswrapper[4821]: I0930 
17:18:39.695685 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"093b61b0-fba5-4f4e-8913-0c3700840535","Type":"ContainerDied","Data":"dd1bdb298cd3aebeec45b8d33354b58e3efb4aced5708c024c0f739db0202fdf"} Sep 30 17:18:39 crc kubenswrapper[4821]: I0930 17:18:39.700586 4821 generic.go:334] "Generic (PLEG): container finished" podID="4fe717f3-bd67-4fbe-9f81-ba924767f2aa" containerID="ea8352a5f781b835026a48b4e89bc5a9e5aa31ef07e385e3712da225e075ffd3" exitCode=0 Sep 30 17:18:39 crc kubenswrapper[4821]: I0930 17:18:39.700776 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4fe717f3-bd67-4fbe-9f81-ba924767f2aa","Type":"ContainerDied","Data":"ea8352a5f781b835026a48b4e89bc5a9e5aa31ef07e385e3712da225e075ffd3"} Sep 30 17:18:42 crc kubenswrapper[4821]: I0930 17:18:42.638709 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f896c8c65-vj24r" Sep 30 17:18:42 crc kubenswrapper[4821]: I0930 17:18:42.725202 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4fe717f3-bd67-4fbe-9f81-ba924767f2aa","Type":"ContainerStarted","Data":"86ac16259b78dbbcfa5e4bfd02e4854f7f10e5fce8ab26eecf2a97d9ce5f7e56"} Sep 30 17:18:42 crc kubenswrapper[4821]: I0930 17:18:42.730144 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"093b61b0-fba5-4f4e-8913-0c3700840535","Type":"ContainerStarted","Data":"2f28059d05bff10c02fcda4c7c8173958c46fb1d58a24aea12af2ad3d476bc65"} Sep 30 17:18:42 crc kubenswrapper[4821]: I0930 17:18:42.750593 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=26.183728885 podStartE2EDuration="33.750573196s" podCreationTimestamp="2025-09-30 17:18:09 +0000 UTC" firstStartedPulling="2025-09-30 17:18:27.318574142 +0000 UTC m=+903.223620086" lastFinishedPulling="2025-09-30 17:18:34.885418453 +0000 UTC m=+910.790464397" observedRunningTime="2025-09-30 17:18:42.744758622 +0000 UTC m=+918.649804566" watchObservedRunningTime="2025-09-30 17:18:42.750573196 +0000 UTC m=+918.655619150" Sep 30 17:18:42 crc kubenswrapper[4821]: I0930 17:18:42.764813 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-hqt9z" Sep 30 17:18:42 crc kubenswrapper[4821]: I0930 17:18:42.806986 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=25.277944997 podStartE2EDuration="32.806968338s" podCreationTimestamp="2025-09-30 17:18:10 +0000 UTC" firstStartedPulling="2025-09-30 17:18:27.30925721 +0000 UTC m=+903.214303154" lastFinishedPulling="2025-09-30 17:18:34.838280561 +0000 UTC m=+910.743326495" observedRunningTime="2025-09-30 17:18:42.773010335 +0000 UTC m=+918.678056279" watchObservedRunningTime="2025-09-30 17:18:42.806968338 +0000 UTC m=+918.712014282" Sep 30 17:18:42 crc kubenswrapper[4821]: I0930 17:18:42.843328 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-vj24r"] Sep 30 17:18:42 crc kubenswrapper[4821]: I0930 17:18:42.843569 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f896c8c65-vj24r" podUID="c435e242-741e-434a-af32-8e45ca9cdb1f" containerName="dnsmasq-dns" containerID="cri-o://7ae2425506754fd72098cf6770a94a6ed0f2ccd212025142b7b94fa68e33c8ee" gracePeriod=10 
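Annotation: the "Observed pod startup duration" records above follow a consistent arithmetic that the monotonic m=+<seconds> offsets make easy to verify: podStartSLOduration equals podStartE2EDuration minus the image-pull window (lastFinishedPulling - firstStartedPulling). A minimal sketch in Python, with values transcribed from the ovn-controller-4bf4d record; the relationship is inferred from the numbers in this log, not quoted from the kubelet source:

# Check: podStartSLOduration = podStartE2EDuration - time spent pulling images.
# Values transcribed from the ovn-controller-4bf4d record above; the m=+<s>
# offsets are monotonic seconds since kubelet start, so plain subtraction works.
first_started_pulling = 903.224947949   # firstStartedPulling, m=+ offset
last_finished_pulling = 910.790686452   # lastFinishedPulling, m=+ offset
e2e_duration = 17.663791615             # podStartE2EDuration, seconds

pull_duration = last_finished_pulling - first_started_pulling   # 7.565738503 s
slo_duration = e2e_duration - pull_duration                     # 10.098053112 s
assert abs(slo_duration - 10.098053112) < 1e-6  # matches podStartSLOduration
print(f"pull={pull_duration:.9f}s slo={slo_duration:.9f}s")

The same identity holds for the openstack-galera-0 and openstack-cell1-galera-0 records above, and for pods that pulled nothing the log shows zero-valued pull timestamps ("0001-01-01 00:00:00 +0000 UTC") with podStartSLOduration equal to podStartE2EDuration, as in the dnsmasq records earlier.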
Sep 30 17:18:43 crc kubenswrapper[4821]: I0930 17:18:43.740123 4821 generic.go:334] "Generic (PLEG): container finished" podID="c435e242-741e-434a-af32-8e45ca9cdb1f" containerID="7ae2425506754fd72098cf6770a94a6ed0f2ccd212025142b7b94fa68e33c8ee" exitCode=0 Sep 30 17:18:43 crc kubenswrapper[4821]: I0930 17:18:43.740205 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-vj24r" event={"ID":"c435e242-741e-434a-af32-8e45ca9cdb1f","Type":"ContainerDied","Data":"7ae2425506754fd72098cf6770a94a6ed0f2ccd212025142b7b94fa68e33c8ee"} Sep 30 17:18:43 crc kubenswrapper[4821]: I0930 17:18:43.810572 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-vj24r" Sep 30 17:18:43 crc kubenswrapper[4821]: I0930 17:18:43.959604 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjhps\" (UniqueName: \"kubernetes.io/projected/c435e242-741e-434a-af32-8e45ca9cdb1f-kube-api-access-cjhps\") pod \"c435e242-741e-434a-af32-8e45ca9cdb1f\" (UID: \"c435e242-741e-434a-af32-8e45ca9cdb1f\") " Sep 30 17:18:43 crc kubenswrapper[4821]: I0930 17:18:43.959683 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c435e242-741e-434a-af32-8e45ca9cdb1f-config\") pod \"c435e242-741e-434a-af32-8e45ca9cdb1f\" (UID: \"c435e242-741e-434a-af32-8e45ca9cdb1f\") " Sep 30 17:18:43 crc kubenswrapper[4821]: I0930 17:18:43.959713 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c435e242-741e-434a-af32-8e45ca9cdb1f-ovsdbserver-sb\") pod \"c435e242-741e-434a-af32-8e45ca9cdb1f\" (UID: \"c435e242-741e-434a-af32-8e45ca9cdb1f\") " Sep 30 17:18:43 crc kubenswrapper[4821]: I0930 17:18:43.959869 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c435e242-741e-434a-af32-8e45ca9cdb1f-dns-svc\") pod \"c435e242-741e-434a-af32-8e45ca9cdb1f\" (UID: \"c435e242-741e-434a-af32-8e45ca9cdb1f\") " Sep 30 17:18:43 crc kubenswrapper[4821]: I0930 17:18:43.978299 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c435e242-741e-434a-af32-8e45ca9cdb1f-kube-api-access-cjhps" (OuterVolumeSpecName: "kube-api-access-cjhps") pod "c435e242-741e-434a-af32-8e45ca9cdb1f" (UID: "c435e242-741e-434a-af32-8e45ca9cdb1f"). InnerVolumeSpecName "kube-api-access-cjhps". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:18:43 crc kubenswrapper[4821]: I0930 17:18:43.997462 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c435e242-741e-434a-af32-8e45ca9cdb1f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c435e242-741e-434a-af32-8e45ca9cdb1f" (UID: "c435e242-741e-434a-af32-8e45ca9cdb1f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:18:44 crc kubenswrapper[4821]: I0930 17:18:44.012007 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c435e242-741e-434a-af32-8e45ca9cdb1f-config" (OuterVolumeSpecName: "config") pod "c435e242-741e-434a-af32-8e45ca9cdb1f" (UID: "c435e242-741e-434a-af32-8e45ca9cdb1f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:18:44 crc kubenswrapper[4821]: I0930 17:18:44.017463 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c435e242-741e-434a-af32-8e45ca9cdb1f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c435e242-741e-434a-af32-8e45ca9cdb1f" (UID: "c435e242-741e-434a-af32-8e45ca9cdb1f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:18:44 crc kubenswrapper[4821]: I0930 17:18:44.061586 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjhps\" (UniqueName: \"kubernetes.io/projected/c435e242-741e-434a-af32-8e45ca9cdb1f-kube-api-access-cjhps\") on node \"crc\" DevicePath \"\"" Sep 30 17:18:44 crc kubenswrapper[4821]: I0930 17:18:44.061631 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c435e242-741e-434a-af32-8e45ca9cdb1f-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:18:44 crc kubenswrapper[4821]: I0930 17:18:44.061642 4821 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c435e242-741e-434a-af32-8e45ca9cdb1f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 30 17:18:44 crc kubenswrapper[4821]: I0930 17:18:44.061650 4821 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c435e242-741e-434a-af32-8e45ca9cdb1f-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 30 17:18:44 crc kubenswrapper[4821]: I0930 17:18:44.749659 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"7d955e25-1ea9-49f6-b98e-b431a8e82fa8","Type":"ContainerStarted","Data":"6c8f2c7c0ee4da78c99299a79f1e3b2f0f442d9792f0e0bc2074b0bdb0739009"} Sep 30 17:18:44 crc kubenswrapper[4821]: I0930 17:18:44.751555 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e671ec19-1ea3-4632-93fb-c2e1616b1e33","Type":"ContainerStarted","Data":"c0f0a7a2d66b1770f5d61ea1ddc09c8237d124c30163400c8e5e2993707715a3"} Sep 30 17:18:44 crc kubenswrapper[4821]: I0930 17:18:44.754725 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-vj24r" event={"ID":"c435e242-741e-434a-af32-8e45ca9cdb1f","Type":"ContainerDied","Data":"6908dece30c629abd6787433daff770c61eff7c526d9d5bfe17724a0952e7bee"} Sep 30 17:18:44 crc kubenswrapper[4821]: I0930 17:18:44.754773 4821 scope.go:117] "RemoveContainer" containerID="7ae2425506754fd72098cf6770a94a6ed0f2ccd212025142b7b94fa68e33c8ee" Sep 30 17:18:44 crc kubenswrapper[4821]: I0930 17:18:44.754852 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-vj24r" Sep 30 17:18:44 crc kubenswrapper[4821]: I0930 17:18:44.758801 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-mlhfm" event={"ID":"90a47708-f26b-40cc-b6c7-a436b54470e1","Type":"ContainerStarted","Data":"7f9100a81fa226c5f7d220f50bca6c051fb8954660c60388e4d962d62b460a70"} Sep 30 17:18:44 crc kubenswrapper[4821]: I0930 17:18:44.774773 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=9.20342151 podStartE2EDuration="25.774755156s" podCreationTimestamp="2025-09-30 17:18:19 +0000 UTC" firstStartedPulling="2025-09-30 17:18:27.535292298 +0000 UTC m=+903.440338242" lastFinishedPulling="2025-09-30 17:18:44.106625944 +0000 UTC m=+920.011671888" observedRunningTime="2025-09-30 17:18:44.772725566 +0000 UTC m=+920.677771520" watchObservedRunningTime="2025-09-30 17:18:44.774755156 +0000 UTC m=+920.679801100" Sep 30 17:18:44 crc kubenswrapper[4821]: I0930 17:18:44.789185 4821 scope.go:117] "RemoveContainer" containerID="a3686e25b64baee6a52422c086c3cd7b42b3117876e631ebbb7f4b310e65a835" Sep 30 17:18:44 crc kubenswrapper[4821]: I0930 17:18:44.802327 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=11.174797328 podStartE2EDuration="26.802308041s" podCreationTimestamp="2025-09-30 17:18:18 +0000 UTC" firstStartedPulling="2025-09-30 17:18:28.460811746 +0000 UTC m=+904.365857690" lastFinishedPulling="2025-09-30 17:18:44.088322459 +0000 UTC m=+919.993368403" observedRunningTime="2025-09-30 17:18:44.79622013 +0000 UTC m=+920.701266074" watchObservedRunningTime="2025-09-30 17:18:44.802308041 +0000 UTC m=+920.707354055" Sep 30 17:18:44 crc kubenswrapper[4821]: I0930 17:18:44.827998 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-mlhfm" podStartSLOduration=5.101195275 podStartE2EDuration="13.827982399s" podCreationTimestamp="2025-09-30 17:18:31 +0000 UTC" firstStartedPulling="2025-09-30 17:18:35.398729339 +0000 UTC m=+911.303775273" lastFinishedPulling="2025-09-30 17:18:44.125516453 +0000 UTC m=+920.030562397" observedRunningTime="2025-09-30 17:18:44.826283377 +0000 UTC m=+920.731329321" watchObservedRunningTime="2025-09-30 17:18:44.827982399 +0000 UTC m=+920.733028343" Sep 30 17:18:44 crc kubenswrapper[4821]: I0930 17:18:44.896549 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-vj24r"] Sep 30 17:18:44 crc kubenswrapper[4821]: I0930 17:18:44.902287 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-vj24r"] Sep 30 17:18:45 crc kubenswrapper[4821]: I0930 17:18:45.290131 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Sep 30 17:18:45 crc kubenswrapper[4821]: I0930 17:18:45.514520 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Sep 30 17:18:46 crc kubenswrapper[4821]: I0930 17:18:46.716703 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c435e242-741e-434a-af32-8e45ca9cdb1f" path="/var/lib/kubelet/pods/c435e242-741e-434a-af32-8e45ca9cdb1f/volumes" Sep 30 17:18:47 crc kubenswrapper[4821]: I0930 17:18:47.290403 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Sep 30 17:18:47 crc kubenswrapper[4821]: I0930 17:18:47.339855 
4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Sep 30 17:18:47 crc kubenswrapper[4821]: I0930 17:18:47.514458 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Sep 30 17:18:47 crc kubenswrapper[4821]: I0930 17:18:47.556388 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Sep 30 17:18:47 crc kubenswrapper[4821]: I0930 17:18:47.818217 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Sep 30 17:18:47 crc kubenswrapper[4821]: I0930 17:18:47.829074 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Sep 30 17:18:48 crc kubenswrapper[4821]: I0930 17:18:48.225721 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Sep 30 17:18:48 crc kubenswrapper[4821]: E0930 17:18:48.226005 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938" containerName="dnsmasq-dns" Sep 30 17:18:48 crc kubenswrapper[4821]: I0930 17:18:48.226020 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938" containerName="dnsmasq-dns" Sep 30 17:18:48 crc kubenswrapper[4821]: E0930 17:18:48.226037 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938" containerName="init" Sep 30 17:18:48 crc kubenswrapper[4821]: I0930 17:18:48.226045 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938" containerName="init" Sep 30 17:18:48 crc kubenswrapper[4821]: E0930 17:18:48.226056 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c435e242-741e-434a-af32-8e45ca9cdb1f" containerName="dnsmasq-dns" Sep 30 17:18:48 crc kubenswrapper[4821]: I0930 17:18:48.226062 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="c435e242-741e-434a-af32-8e45ca9cdb1f" containerName="dnsmasq-dns" Sep 30 17:18:48 crc kubenswrapper[4821]: E0930 17:18:48.226103 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c435e242-741e-434a-af32-8e45ca9cdb1f" containerName="init" Sep 30 17:18:48 crc kubenswrapper[4821]: I0930 17:18:48.226111 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="c435e242-741e-434a-af32-8e45ca9cdb1f" containerName="init" Sep 30 17:18:48 crc kubenswrapper[4821]: I0930 17:18:48.226247 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="c435e242-741e-434a-af32-8e45ca9cdb1f" containerName="dnsmasq-dns" Sep 30 17:18:48 crc kubenswrapper[4821]: I0930 17:18:48.226264 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="5da5e1bb-4ef6-4ff1-9d9d-af8e88fcb938" containerName="dnsmasq-dns" Sep 30 17:18:48 crc kubenswrapper[4821]: I0930 17:18:48.227049 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Sep 30 17:18:48 crc kubenswrapper[4821]: I0930 17:18:48.229499 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-vzpwn" Sep 30 17:18:48 crc kubenswrapper[4821]: I0930 17:18:48.229656 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Sep 30 17:18:48 crc kubenswrapper[4821]: I0930 17:18:48.230231 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Sep 30 17:18:48 crc kubenswrapper[4821]: I0930 17:18:48.230362 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Sep 30 17:18:48 crc kubenswrapper[4821]: I0930 17:18:48.239744 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Sep 30 17:18:48 crc kubenswrapper[4821]: I0930 17:18:48.323119 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/24bfc749-e770-4b60-95b1-869a694a9d70-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"24bfc749-e770-4b60-95b1-869a694a9d70\") " pod="openstack/ovn-northd-0" Sep 30 17:18:48 crc kubenswrapper[4821]: I0930 17:18:48.323180 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/24bfc749-e770-4b60-95b1-869a694a9d70-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"24bfc749-e770-4b60-95b1-869a694a9d70\") " pod="openstack/ovn-northd-0" Sep 30 17:18:48 crc kubenswrapper[4821]: I0930 17:18:48.323202 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24bfc749-e770-4b60-95b1-869a694a9d70-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"24bfc749-e770-4b60-95b1-869a694a9d70\") " pod="openstack/ovn-northd-0" Sep 30 17:18:48 crc kubenswrapper[4821]: I0930 17:18:48.323251 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24bfc749-e770-4b60-95b1-869a694a9d70-config\") pod \"ovn-northd-0\" (UID: \"24bfc749-e770-4b60-95b1-869a694a9d70\") " pod="openstack/ovn-northd-0" Sep 30 17:18:48 crc kubenswrapper[4821]: I0930 17:18:48.323287 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24bfc749-e770-4b60-95b1-869a694a9d70-scripts\") pod \"ovn-northd-0\" (UID: \"24bfc749-e770-4b60-95b1-869a694a9d70\") " pod="openstack/ovn-northd-0" Sep 30 17:18:48 crc kubenswrapper[4821]: I0930 17:18:48.323309 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dznf\" (UniqueName: \"kubernetes.io/projected/24bfc749-e770-4b60-95b1-869a694a9d70-kube-api-access-2dznf\") pod \"ovn-northd-0\" (UID: \"24bfc749-e770-4b60-95b1-869a694a9d70\") " pod="openstack/ovn-northd-0" Sep 30 17:18:48 crc kubenswrapper[4821]: I0930 17:18:48.323347 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/24bfc749-e770-4b60-95b1-869a694a9d70-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"24bfc749-e770-4b60-95b1-869a694a9d70\") " pod="openstack/ovn-northd-0" Sep 30 17:18:48 crc kubenswrapper[4821]: 
I0930 17:18:48.425225 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/24bfc749-e770-4b60-95b1-869a694a9d70-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"24bfc749-e770-4b60-95b1-869a694a9d70\") " pod="openstack/ovn-northd-0" Sep 30 17:18:48 crc kubenswrapper[4821]: I0930 17:18:48.425309 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/24bfc749-e770-4b60-95b1-869a694a9d70-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"24bfc749-e770-4b60-95b1-869a694a9d70\") " pod="openstack/ovn-northd-0" Sep 30 17:18:48 crc kubenswrapper[4821]: I0930 17:18:48.425339 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24bfc749-e770-4b60-95b1-869a694a9d70-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"24bfc749-e770-4b60-95b1-869a694a9d70\") " pod="openstack/ovn-northd-0" Sep 30 17:18:48 crc kubenswrapper[4821]: I0930 17:18:48.425541 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24bfc749-e770-4b60-95b1-869a694a9d70-config\") pod \"ovn-northd-0\" (UID: \"24bfc749-e770-4b60-95b1-869a694a9d70\") " pod="openstack/ovn-northd-0" Sep 30 17:18:48 crc kubenswrapper[4821]: I0930 17:18:48.425983 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/24bfc749-e770-4b60-95b1-869a694a9d70-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"24bfc749-e770-4b60-95b1-869a694a9d70\") " pod="openstack/ovn-northd-0" Sep 30 17:18:48 crc kubenswrapper[4821]: I0930 17:18:48.425591 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24bfc749-e770-4b60-95b1-869a694a9d70-scripts\") pod \"ovn-northd-0\" (UID: \"24bfc749-e770-4b60-95b1-869a694a9d70\") " pod="openstack/ovn-northd-0" Sep 30 17:18:48 crc kubenswrapper[4821]: I0930 17:18:48.426052 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dznf\" (UniqueName: \"kubernetes.io/projected/24bfc749-e770-4b60-95b1-869a694a9d70-kube-api-access-2dznf\") pod \"ovn-northd-0\" (UID: \"24bfc749-e770-4b60-95b1-869a694a9d70\") " pod="openstack/ovn-northd-0" Sep 30 17:18:48 crc kubenswrapper[4821]: I0930 17:18:48.426846 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24bfc749-e770-4b60-95b1-869a694a9d70-config\") pod \"ovn-northd-0\" (UID: \"24bfc749-e770-4b60-95b1-869a694a9d70\") " pod="openstack/ovn-northd-0" Sep 30 17:18:48 crc kubenswrapper[4821]: I0930 17:18:48.427289 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24bfc749-e770-4b60-95b1-869a694a9d70-scripts\") pod \"ovn-northd-0\" (UID: \"24bfc749-e770-4b60-95b1-869a694a9d70\") " pod="openstack/ovn-northd-0" Sep 30 17:18:48 crc kubenswrapper[4821]: I0930 17:18:48.427371 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/24bfc749-e770-4b60-95b1-869a694a9d70-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"24bfc749-e770-4b60-95b1-869a694a9d70\") " pod="openstack/ovn-northd-0" Sep 30 17:18:48 crc kubenswrapper[4821]: I0930 17:18:48.431802 4821 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/24bfc749-e770-4b60-95b1-869a694a9d70-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"24bfc749-e770-4b60-95b1-869a694a9d70\") " pod="openstack/ovn-northd-0" Sep 30 17:18:48 crc kubenswrapper[4821]: I0930 17:18:48.440204 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/24bfc749-e770-4b60-95b1-869a694a9d70-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"24bfc749-e770-4b60-95b1-869a694a9d70\") " pod="openstack/ovn-northd-0" Sep 30 17:18:48 crc kubenswrapper[4821]: I0930 17:18:48.443554 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24bfc749-e770-4b60-95b1-869a694a9d70-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"24bfc749-e770-4b60-95b1-869a694a9d70\") " pod="openstack/ovn-northd-0" Sep 30 17:18:48 crc kubenswrapper[4821]: I0930 17:18:48.467135 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dznf\" (UniqueName: \"kubernetes.io/projected/24bfc749-e770-4b60-95b1-869a694a9d70-kube-api-access-2dznf\") pod \"ovn-northd-0\" (UID: \"24bfc749-e770-4b60-95b1-869a694a9d70\") " pod="openstack/ovn-northd-0" Sep 30 17:18:48 crc kubenswrapper[4821]: I0930 17:18:48.551813 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Sep 30 17:18:49 crc kubenswrapper[4821]: I0930 17:18:49.028214 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Sep 30 17:18:49 crc kubenswrapper[4821]: I0930 17:18:49.350153 4821 patch_prober.go:28] interesting pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:18:49 crc kubenswrapper[4821]: I0930 17:18:49.350211 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:18:49 crc kubenswrapper[4821]: I0930 17:18:49.799697 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"24bfc749-e770-4b60-95b1-869a694a9d70","Type":"ContainerStarted","Data":"5108f6ea9f98aa24d1f16d86901b6e28982b19139f137002fa75026845a00f77"} Sep 30 17:18:50 crc kubenswrapper[4821]: I0930 17:18:50.807799 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"24bfc749-e770-4b60-95b1-869a694a9d70","Type":"ContainerStarted","Data":"e763afa46bc411dfed69d665a461e2995a4b4c3402931303cca390ebedfda535"} Sep 30 17:18:50 crc kubenswrapper[4821]: I0930 17:18:50.808427 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Sep 30 17:18:50 crc kubenswrapper[4821]: I0930 17:18:50.808445 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"24bfc749-e770-4b60-95b1-869a694a9d70","Type":"ContainerStarted","Data":"7f002689c6e85e7c07c1289ecc81bcb329f602fa4bad6cba8e53e95b94901289"} Sep 30 17:18:50 crc kubenswrapper[4821]: I0930 17:18:50.825979 
4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.539973438 podStartE2EDuration="2.825949934s" podCreationTimestamp="2025-09-30 17:18:48 +0000 UTC" firstStartedPulling="2025-09-30 17:18:49.038456546 +0000 UTC m=+924.943502490" lastFinishedPulling="2025-09-30 17:18:50.324433042 +0000 UTC m=+926.229478986" observedRunningTime="2025-09-30 17:18:50.821792101 +0000 UTC m=+926.726838045" watchObservedRunningTime="2025-09-30 17:18:50.825949934 +0000 UTC m=+926.730995878"
Sep 30 17:18:50 crc kubenswrapper[4821]: I0930 17:18:50.918159 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Sep 30 17:18:50 crc kubenswrapper[4821]: I0930 17:18:50.918423 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Sep 30 17:18:50 crc kubenswrapper[4821]: I0930 17:18:50.964120 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Sep 30 17:18:51 crc kubenswrapper[4821]: I0930 17:18:51.860545 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Sep 30 17:18:51 crc kubenswrapper[4821]: I0930 17:18:51.911801 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Sep 30 17:18:51 crc kubenswrapper[4821]: I0930 17:18:51.912159 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Sep 30 17:18:51 crc kubenswrapper[4821]: I0930 17:18:51.972047 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
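Annotation: the probe records above trace the expected state machine for the Galera pods: the startup probe flips from "unhealthy" to "started", after which readiness moves from "" (unknown) to "ready". The machine-config-daemon record earlier shows the failure side, a liveness HTTP GET refused on 127.0.0.1:8798. A rough, self-contained approximation of such an HTTP probe in Python; this is not the kubelet's actual prober implementation, and only the URL is taken from the failing record:

import urllib.request, urllib.error

# Approximate an HTTP liveness/readiness check: Kubernetes treats any status
# in 200-399 as success; connection refused or HTTP >= 400 counts as failure.
def http_probe(url, timeout=1.0):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return "success" if 200 <= resp.status < 400 else "failure"
    except (urllib.error.URLError, OSError) as exc:
        # HTTPError (>= 400) subclasses URLError, so it lands here too,
        # as does "connect: connection refused" from the record above.
        return f"failure: {exc}"

print(http_probe("http://127.0.0.1:8798/health"))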
Need to start a new one" pod="openstack/placement-db-create-4j6r4" Sep 30 17:18:52 crc kubenswrapper[4821]: I0930 17:18:52.416243 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-4j6r4"] Sep 30 17:18:52 crc kubenswrapper[4821]: I0930 17:18:52.526519 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kwvh\" (UniqueName: \"kubernetes.io/projected/4fb6d6a0-c0b6-4260-bcbd-b8af003f848f-kube-api-access-2kwvh\") pod \"placement-db-create-4j6r4\" (UID: \"4fb6d6a0-c0b6-4260-bcbd-b8af003f848f\") " pod="openstack/placement-db-create-4j6r4" Sep 30 17:18:52 crc kubenswrapper[4821]: I0930 17:18:52.627568 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kwvh\" (UniqueName: \"kubernetes.io/projected/4fb6d6a0-c0b6-4260-bcbd-b8af003f848f-kube-api-access-2kwvh\") pod \"placement-db-create-4j6r4\" (UID: \"4fb6d6a0-c0b6-4260-bcbd-b8af003f848f\") " pod="openstack/placement-db-create-4j6r4" Sep 30 17:18:52 crc kubenswrapper[4821]: I0930 17:18:52.659900 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kwvh\" (UniqueName: \"kubernetes.io/projected/4fb6d6a0-c0b6-4260-bcbd-b8af003f848f-kube-api-access-2kwvh\") pod \"placement-db-create-4j6r4\" (UID: \"4fb6d6a0-c0b6-4260-bcbd-b8af003f848f\") " pod="openstack/placement-db-create-4j6r4" Sep 30 17:18:52 crc kubenswrapper[4821]: I0930 17:18:52.718923 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-4j6r4" Sep 30 17:18:52 crc kubenswrapper[4821]: I0930 17:18:52.909956 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Sep 30 17:18:53 crc kubenswrapper[4821]: I0930 17:18:53.172219 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-4j6r4"] Sep 30 17:18:53 crc kubenswrapper[4821]: I0930 17:18:53.852849 4821 generic.go:334] "Generic (PLEG): container finished" podID="4fb6d6a0-c0b6-4260-bcbd-b8af003f848f" containerID="f5979dec0f7d1d1c8fb457a5a8d5ed103279b6c225851f41508858cfe6624709" exitCode=0 Sep 30 17:18:53 crc kubenswrapper[4821]: I0930 17:18:53.852945 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-4j6r4" event={"ID":"4fb6d6a0-c0b6-4260-bcbd-b8af003f848f","Type":"ContainerDied","Data":"f5979dec0f7d1d1c8fb457a5a8d5ed103279b6c225851f41508858cfe6624709"} Sep 30 17:18:53 crc kubenswrapper[4821]: I0930 17:18:53.853212 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-4j6r4" event={"ID":"4fb6d6a0-c0b6-4260-bcbd-b8af003f848f","Type":"ContainerStarted","Data":"c67d65798db6e7e27ab9efbfa64c3b8fb20b2caea0180ba3fc02c0cccaf1d91b"} Sep 30 17:18:55 crc kubenswrapper[4821]: I0930 17:18:55.175152 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-4j6r4" Sep 30 17:18:55 crc kubenswrapper[4821]: I0930 17:18:55.266633 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kwvh\" (UniqueName: \"kubernetes.io/projected/4fb6d6a0-c0b6-4260-bcbd-b8af003f848f-kube-api-access-2kwvh\") pod \"4fb6d6a0-c0b6-4260-bcbd-b8af003f848f\" (UID: \"4fb6d6a0-c0b6-4260-bcbd-b8af003f848f\") " Sep 30 17:18:55 crc kubenswrapper[4821]: I0930 17:18:55.271976 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fb6d6a0-c0b6-4260-bcbd-b8af003f848f-kube-api-access-2kwvh" (OuterVolumeSpecName: "kube-api-access-2kwvh") pod "4fb6d6a0-c0b6-4260-bcbd-b8af003f848f" (UID: "4fb6d6a0-c0b6-4260-bcbd-b8af003f848f"). InnerVolumeSpecName "kube-api-access-2kwvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:18:55 crc kubenswrapper[4821]: I0930 17:18:55.367811 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kwvh\" (UniqueName: \"kubernetes.io/projected/4fb6d6a0-c0b6-4260-bcbd-b8af003f848f-kube-api-access-2kwvh\") on node \"crc\" DevicePath \"\"" Sep 30 17:18:55 crc kubenswrapper[4821]: I0930 17:18:55.868814 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-4j6r4" event={"ID":"4fb6d6a0-c0b6-4260-bcbd-b8af003f848f","Type":"ContainerDied","Data":"c67d65798db6e7e27ab9efbfa64c3b8fb20b2caea0180ba3fc02c0cccaf1d91b"} Sep 30 17:18:55 crc kubenswrapper[4821]: I0930 17:18:55.869404 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c67d65798db6e7e27ab9efbfa64c3b8fb20b2caea0180ba3fc02c0cccaf1d91b" Sep 30 17:18:55 crc kubenswrapper[4821]: I0930 17:18:55.868937 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-4j6r4" Sep 30 17:18:57 crc kubenswrapper[4821]: I0930 17:18:57.564644 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-fmj6j"] Sep 30 17:18:57 crc kubenswrapper[4821]: E0930 17:18:57.564942 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fb6d6a0-c0b6-4260-bcbd-b8af003f848f" containerName="mariadb-database-create" Sep 30 17:18:57 crc kubenswrapper[4821]: I0930 17:18:57.564954 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fb6d6a0-c0b6-4260-bcbd-b8af003f848f" containerName="mariadb-database-create" Sep 30 17:18:57 crc kubenswrapper[4821]: I0930 17:18:57.565124 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fb6d6a0-c0b6-4260-bcbd-b8af003f848f" containerName="mariadb-database-create" Sep 30 17:18:57 crc kubenswrapper[4821]: I0930 17:18:57.565699 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-fmj6j" Sep 30 17:18:57 crc kubenswrapper[4821]: I0930 17:18:57.580512 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-fmj6j"] Sep 30 17:18:57 crc kubenswrapper[4821]: I0930 17:18:57.602638 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht8pn\" (UniqueName: \"kubernetes.io/projected/4b9935cd-21ae-4020-b970-1ad0bc26b130-kube-api-access-ht8pn\") pod \"glance-db-create-fmj6j\" (UID: \"4b9935cd-21ae-4020-b970-1ad0bc26b130\") " pod="openstack/glance-db-create-fmj6j" Sep 30 17:18:57 crc kubenswrapper[4821]: I0930 17:18:57.704402 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht8pn\" (UniqueName: \"kubernetes.io/projected/4b9935cd-21ae-4020-b970-1ad0bc26b130-kube-api-access-ht8pn\") pod \"glance-db-create-fmj6j\" (UID: \"4b9935cd-21ae-4020-b970-1ad0bc26b130\") " pod="openstack/glance-db-create-fmj6j" Sep 30 17:18:57 crc kubenswrapper[4821]: I0930 17:18:57.729349 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht8pn\" (UniqueName: \"kubernetes.io/projected/4b9935cd-21ae-4020-b970-1ad0bc26b130-kube-api-access-ht8pn\") pod \"glance-db-create-fmj6j\" (UID: \"4b9935cd-21ae-4020-b970-1ad0bc26b130\") " pod="openstack/glance-db-create-fmj6j" Sep 30 17:18:57 crc kubenswrapper[4821]: I0930 17:18:57.880915 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fmj6j" Sep 30 17:18:57 crc kubenswrapper[4821]: I0930 17:18:57.883449 4821 generic.go:334] "Generic (PLEG): container finished" podID="9564b951-f1dc-471d-b442-9fc27616e8b6" containerID="78f71d9b6757fc260c7967bcd111603996f5e90606635f69503b3b3c7112c511" exitCode=0 Sep 30 17:18:57 crc kubenswrapper[4821]: I0930 17:18:57.883517 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9564b951-f1dc-471d-b442-9fc27616e8b6","Type":"ContainerDied","Data":"78f71d9b6757fc260c7967bcd111603996f5e90606635f69503b3b3c7112c511"} Sep 30 17:18:57 crc kubenswrapper[4821]: I0930 17:18:57.884925 4821 generic.go:334] "Generic (PLEG): container finished" podID="3c950f02-8f72-4d89-af10-660187db2344" containerID="916213117f88f5615afa53e0f0f318c2b32fa61448cbb743d00cbca29edcaa13" exitCode=0 Sep 30 17:18:57 crc kubenswrapper[4821]: I0930 17:18:57.884957 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3c950f02-8f72-4d89-af10-660187db2344","Type":"ContainerDied","Data":"916213117f88f5615afa53e0f0f318c2b32fa61448cbb743d00cbca29edcaa13"} Sep 30 17:18:58 crc kubenswrapper[4821]: W0930 17:18:58.351548 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b9935cd_21ae_4020_b970_1ad0bc26b130.slice/crio-086ae85e4c0bf4ee49e6f2518aff792f3709041ac93a0649161a7b924e15a8e8 WatchSource:0}: Error finding container 086ae85e4c0bf4ee49e6f2518aff792f3709041ac93a0649161a7b924e15a8e8: Status 404 returned error can't find the container with id 086ae85e4c0bf4ee49e6f2518aff792f3709041ac93a0649161a7b924e15a8e8 Sep 30 17:18:58 crc kubenswrapper[4821]: I0930 17:18:58.353853 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-fmj6j"] Sep 30 17:18:58 crc kubenswrapper[4821]: I0930 17:18:58.895964 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9564b951-f1dc-471d-b442-9fc27616e8b6","Type":"ContainerStarted","Data":"21f89bde538154cff1998943ece2403222d92bfd7cac2854125ed39d5cd2a931"} Sep 30 17:18:58 crc kubenswrapper[4821]: I0930 17:18:58.896536 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:18:58 crc kubenswrapper[4821]: I0930 17:18:58.898640 4821 generic.go:334] "Generic (PLEG): container finished" podID="4b9935cd-21ae-4020-b970-1ad0bc26b130" containerID="b122a5c3c8a7f2e2b13e62e079dd944939a901e4c28bed1302d4e2332a0df4ea" exitCode=0 Sep 30 17:18:58 crc kubenswrapper[4821]: I0930 17:18:58.898671 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fmj6j" event={"ID":"4b9935cd-21ae-4020-b970-1ad0bc26b130","Type":"ContainerDied","Data":"b122a5c3c8a7f2e2b13e62e079dd944939a901e4c28bed1302d4e2332a0df4ea"} Sep 30 17:18:58 crc kubenswrapper[4821]: I0930 17:18:58.898707 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fmj6j" event={"ID":"4b9935cd-21ae-4020-b970-1ad0bc26b130","Type":"ContainerStarted","Data":"086ae85e4c0bf4ee49e6f2518aff792f3709041ac93a0649161a7b924e15a8e8"} Sep 30 17:18:58 crc kubenswrapper[4821]: I0930 17:18:58.902505 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3c950f02-8f72-4d89-af10-660187db2344","Type":"ContainerStarted","Data":"63aec11c369ad61a7999b1ef7e24b76754c1794fa57f5c95e1b503d22f0b6aa7"} Sep 30 17:18:58 crc kubenswrapper[4821]: I0930 17:18:58.902688 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Sep 30 17:18:58 crc kubenswrapper[4821]: I0930 17:18:58.947180 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=50.447757211 podStartE2EDuration="51.9471595s" podCreationTimestamp="2025-09-30 17:18:07 +0000 UTC" firstStartedPulling="2025-09-30 17:18:25.244542753 +0000 UTC m=+901.149588687" lastFinishedPulling="2025-09-30 17:18:26.743945032 +0000 UTC m=+902.648990976" observedRunningTime="2025-09-30 17:18:58.92419721 +0000 UTC m=+934.829243154" watchObservedRunningTime="2025-09-30 17:18:58.9471595 +0000 UTC m=+934.852205444" Sep 30 17:18:58 crc kubenswrapper[4821]: I0930 17:18:58.970864 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=39.64256587 podStartE2EDuration="51.970844919s" podCreationTimestamp="2025-09-30 17:18:07 +0000 UTC" firstStartedPulling="2025-09-30 17:18:14.416981956 +0000 UTC m=+890.322027900" lastFinishedPulling="2025-09-30 17:18:26.745261005 +0000 UTC m=+902.650306949" observedRunningTime="2025-09-30 17:18:58.965495506 +0000 UTC m=+934.870541450" watchObservedRunningTime="2025-09-30 17:18:58.970844919 +0000 UTC m=+934.875890863" Sep 30 17:19:00 crc kubenswrapper[4821]: I0930 17:19:00.210852 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-fmj6j" Sep 30 17:19:00 crc kubenswrapper[4821]: I0930 17:19:00.344810 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ht8pn\" (UniqueName: \"kubernetes.io/projected/4b9935cd-21ae-4020-b970-1ad0bc26b130-kube-api-access-ht8pn\") pod \"4b9935cd-21ae-4020-b970-1ad0bc26b130\" (UID: \"4b9935cd-21ae-4020-b970-1ad0bc26b130\") " Sep 30 17:19:00 crc kubenswrapper[4821]: I0930 17:19:00.351306 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b9935cd-21ae-4020-b970-1ad0bc26b130-kube-api-access-ht8pn" (OuterVolumeSpecName: "kube-api-access-ht8pn") pod "4b9935cd-21ae-4020-b970-1ad0bc26b130" (UID: "4b9935cd-21ae-4020-b970-1ad0bc26b130"). InnerVolumeSpecName "kube-api-access-ht8pn". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:19:00 crc kubenswrapper[4821]: I0930 17:19:00.446844 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ht8pn\" (UniqueName: \"kubernetes.io/projected/4b9935cd-21ae-4020-b970-1ad0bc26b130-kube-api-access-ht8pn\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:00 crc kubenswrapper[4821]: I0930 17:19:00.915681 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fmj6j" Sep 30 17:19:00 crc kubenswrapper[4821]: I0930 17:19:00.915673 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fmj6j" event={"ID":"4b9935cd-21ae-4020-b970-1ad0bc26b130","Type":"ContainerDied","Data":"086ae85e4c0bf4ee49e6f2518aff792f3709041ac93a0649161a7b924e15a8e8"} Sep 30 17:19:00 crc kubenswrapper[4821]: I0930 17:19:00.916244 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="086ae85e4c0bf4ee49e6f2518aff792f3709041ac93a0649161a7b924e15a8e8" Sep 30 17:19:01 crc kubenswrapper[4821]: I0930 17:19:01.949784 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-df7p4"] Sep 30 17:19:01 crc kubenswrapper[4821]: E0930 17:19:01.950480 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b9935cd-21ae-4020-b970-1ad0bc26b130" containerName="mariadb-database-create" Sep 30 17:19:01 crc kubenswrapper[4821]: I0930 17:19:01.950497 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b9935cd-21ae-4020-b970-1ad0bc26b130" containerName="mariadb-database-create" Sep 30 17:19:01 crc kubenswrapper[4821]: I0930 17:19:01.950685 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b9935cd-21ae-4020-b970-1ad0bc26b130" containerName="mariadb-database-create" Sep 30 17:19:01 crc kubenswrapper[4821]: I0930 17:19:01.951284 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-df7p4" Sep 30 17:19:01 crc kubenswrapper[4821]: I0930 17:19:01.964016 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-df7p4"] Sep 30 17:19:02 crc kubenswrapper[4821]: I0930 17:19:02.073103 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfc5p\" (UniqueName: \"kubernetes.io/projected/21178c4c-b5c9-4507-a186-096e63f59c93-kube-api-access-jfc5p\") pod \"keystone-db-create-df7p4\" (UID: \"21178c4c-b5c9-4507-a186-096e63f59c93\") " pod="openstack/keystone-db-create-df7p4" Sep 30 17:19:02 crc kubenswrapper[4821]: I0930 17:19:02.174803 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfc5p\" (UniqueName: \"kubernetes.io/projected/21178c4c-b5c9-4507-a186-096e63f59c93-kube-api-access-jfc5p\") pod \"keystone-db-create-df7p4\" (UID: \"21178c4c-b5c9-4507-a186-096e63f59c93\") " pod="openstack/keystone-db-create-df7p4" Sep 30 17:19:02 crc kubenswrapper[4821]: I0930 17:19:02.191788 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfc5p\" (UniqueName: \"kubernetes.io/projected/21178c4c-b5c9-4507-a186-096e63f59c93-kube-api-access-jfc5p\") pod \"keystone-db-create-df7p4\" (UID: \"21178c4c-b5c9-4507-a186-096e63f59c93\") " pod="openstack/keystone-db-create-df7p4" Sep 30 17:19:02 crc kubenswrapper[4821]: I0930 17:19:02.292585 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-df7p4" Sep 30 17:19:02 crc kubenswrapper[4821]: I0930 17:19:02.380372 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-24ab-account-create-779jc"] Sep 30 17:19:02 crc kubenswrapper[4821]: I0930 17:19:02.381850 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-24ab-account-create-779jc" Sep 30 17:19:02 crc kubenswrapper[4821]: I0930 17:19:02.384004 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Sep 30 17:19:02 crc kubenswrapper[4821]: I0930 17:19:02.394358 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-24ab-account-create-779jc"] Sep 30 17:19:02 crc kubenswrapper[4821]: I0930 17:19:02.478098 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvv95\" (UniqueName: \"kubernetes.io/projected/17cc7c47-8e87-4cdd-b233-605733bd7444-kube-api-access-mvv95\") pod \"placement-24ab-account-create-779jc\" (UID: \"17cc7c47-8e87-4cdd-b233-605733bd7444\") " pod="openstack/placement-24ab-account-create-779jc" Sep 30 17:19:02 crc kubenswrapper[4821]: I0930 17:19:02.580656 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvv95\" (UniqueName: \"kubernetes.io/projected/17cc7c47-8e87-4cdd-b233-605733bd7444-kube-api-access-mvv95\") pod \"placement-24ab-account-create-779jc\" (UID: \"17cc7c47-8e87-4cdd-b233-605733bd7444\") " pod="openstack/placement-24ab-account-create-779jc" Sep 30 17:19:02 crc kubenswrapper[4821]: I0930 17:19:02.604109 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvv95\" (UniqueName: \"kubernetes.io/projected/17cc7c47-8e87-4cdd-b233-605733bd7444-kube-api-access-mvv95\") pod \"placement-24ab-account-create-779jc\" (UID: \"17cc7c47-8e87-4cdd-b233-605733bd7444\") " pod="openstack/placement-24ab-account-create-779jc" Sep 30 17:19:02 crc kubenswrapper[4821]: I0930 17:19:02.714377 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-24ab-account-create-779jc" Sep 30 17:19:02 crc kubenswrapper[4821]: I0930 17:19:02.803506 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-df7p4"] Sep 30 17:19:02 crc kubenswrapper[4821]: W0930 17:19:02.819228 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21178c4c_b5c9_4507_a186_096e63f59c93.slice/crio-2c89ccd1e8af11fa6c1af7e9c185a63cfddbe9b7c50add882294e47335707776 WatchSource:0}: Error finding container 2c89ccd1e8af11fa6c1af7e9c185a63cfddbe9b7c50add882294e47335707776: Status 404 returned error can't find the container with id 2c89ccd1e8af11fa6c1af7e9c185a63cfddbe9b7c50add882294e47335707776 Sep 30 17:19:02 crc kubenswrapper[4821]: I0930 17:19:02.940754 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-df7p4" event={"ID":"21178c4c-b5c9-4507-a186-096e63f59c93","Type":"ContainerStarted","Data":"2c89ccd1e8af11fa6c1af7e9c185a63cfddbe9b7c50add882294e47335707776"} Sep 30 17:19:03 crc kubenswrapper[4821]: I0930 17:19:03.175867 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-24ab-account-create-779jc"] Sep 30 17:19:03 crc kubenswrapper[4821]: W0930 17:19:03.178865 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17cc7c47_8e87_4cdd_b233_605733bd7444.slice/crio-c24afacb200c6394d0eec932f2e5a508dcaae9afa4cc90abadfab15b766ff876 WatchSource:0}: Error finding container c24afacb200c6394d0eec932f2e5a508dcaae9afa4cc90abadfab15b766ff876: Status 404 returned error can't find the container with id c24afacb200c6394d0eec932f2e5a508dcaae9afa4cc90abadfab15b766ff876 Sep 30 17:19:03 crc kubenswrapper[4821]: I0930 17:19:03.606412 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Sep 30 17:19:03 crc kubenswrapper[4821]: I0930 17:19:03.949899 4821 generic.go:334] "Generic (PLEG): container finished" podID="21178c4c-b5c9-4507-a186-096e63f59c93" containerID="bc5632130b2444ef6061bcf525d53b20c45f0530d414a60bc0f0e320400213ee" exitCode=0 Sep 30 17:19:03 crc kubenswrapper[4821]: I0930 17:19:03.950282 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-df7p4" event={"ID":"21178c4c-b5c9-4507-a186-096e63f59c93","Type":"ContainerDied","Data":"bc5632130b2444ef6061bcf525d53b20c45f0530d414a60bc0f0e320400213ee"} Sep 30 17:19:03 crc kubenswrapper[4821]: I0930 17:19:03.951919 4821 generic.go:334] "Generic (PLEG): container finished" podID="17cc7c47-8e87-4cdd-b233-605733bd7444" containerID="ea94d2a9ec6e1fc9b4ce6212ffe1316bb4d6debd03f2ded2c6eb1458a8c21e93" exitCode=0 Sep 30 17:19:03 crc kubenswrapper[4821]: I0930 17:19:03.951965 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-24ab-account-create-779jc" event={"ID":"17cc7c47-8e87-4cdd-b233-605733bd7444","Type":"ContainerDied","Data":"ea94d2a9ec6e1fc9b4ce6212ffe1316bb4d6debd03f2ded2c6eb1458a8c21e93"} Sep 30 17:19:03 crc kubenswrapper[4821]: I0930 17:19:03.951991 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-24ab-account-create-779jc" event={"ID":"17cc7c47-8e87-4cdd-b233-605733bd7444","Type":"ContainerStarted","Data":"c24afacb200c6394d0eec932f2e5a508dcaae9afa4cc90abadfab15b766ff876"} Sep 30 17:19:05 crc kubenswrapper[4821]: I0930 17:19:05.385813 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-24ab-account-create-779jc" Sep 30 17:19:05 crc kubenswrapper[4821]: I0930 17:19:05.391419 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-df7p4" Sep 30 17:19:05 crc kubenswrapper[4821]: I0930 17:19:05.531625 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvv95\" (UniqueName: \"kubernetes.io/projected/17cc7c47-8e87-4cdd-b233-605733bd7444-kube-api-access-mvv95\") pod \"17cc7c47-8e87-4cdd-b233-605733bd7444\" (UID: \"17cc7c47-8e87-4cdd-b233-605733bd7444\") " Sep 30 17:19:05 crc kubenswrapper[4821]: I0930 17:19:05.531710 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfc5p\" (UniqueName: \"kubernetes.io/projected/21178c4c-b5c9-4507-a186-096e63f59c93-kube-api-access-jfc5p\") pod \"21178c4c-b5c9-4507-a186-096e63f59c93\" (UID: \"21178c4c-b5c9-4507-a186-096e63f59c93\") " Sep 30 17:19:05 crc kubenswrapper[4821]: I0930 17:19:05.552928 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17cc7c47-8e87-4cdd-b233-605733bd7444-kube-api-access-mvv95" (OuterVolumeSpecName: "kube-api-access-mvv95") pod "17cc7c47-8e87-4cdd-b233-605733bd7444" (UID: "17cc7c47-8e87-4cdd-b233-605733bd7444"). InnerVolumeSpecName "kube-api-access-mvv95". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:19:05 crc kubenswrapper[4821]: I0930 17:19:05.556696 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21178c4c-b5c9-4507-a186-096e63f59c93-kube-api-access-jfc5p" (OuterVolumeSpecName: "kube-api-access-jfc5p") pod "21178c4c-b5c9-4507-a186-096e63f59c93" (UID: "21178c4c-b5c9-4507-a186-096e63f59c93"). InnerVolumeSpecName "kube-api-access-jfc5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:19:05 crc kubenswrapper[4821]: I0930 17:19:05.633812 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvv95\" (UniqueName: \"kubernetes.io/projected/17cc7c47-8e87-4cdd-b233-605733bd7444-kube-api-access-mvv95\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:05 crc kubenswrapper[4821]: I0930 17:19:05.633857 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfc5p\" (UniqueName: \"kubernetes.io/projected/21178c4c-b5c9-4507-a186-096e63f59c93-kube-api-access-jfc5p\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:05 crc kubenswrapper[4821]: I0930 17:19:05.968055 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-df7p4" event={"ID":"21178c4c-b5c9-4507-a186-096e63f59c93","Type":"ContainerDied","Data":"2c89ccd1e8af11fa6c1af7e9c185a63cfddbe9b7c50add882294e47335707776"} Sep 30 17:19:05 crc kubenswrapper[4821]: I0930 17:19:05.968371 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c89ccd1e8af11fa6c1af7e9c185a63cfddbe9b7c50add882294e47335707776" Sep 30 17:19:05 crc kubenswrapper[4821]: I0930 17:19:05.968127 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-df7p4" Sep 30 17:19:05 crc kubenswrapper[4821]: I0930 17:19:05.969428 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-24ab-account-create-779jc" event={"ID":"17cc7c47-8e87-4cdd-b233-605733bd7444","Type":"ContainerDied","Data":"c24afacb200c6394d0eec932f2e5a508dcaae9afa4cc90abadfab15b766ff876"} Sep 30 17:19:05 crc kubenswrapper[4821]: I0930 17:19:05.969465 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c24afacb200c6394d0eec932f2e5a508dcaae9afa4cc90abadfab15b766ff876" Sep 30 17:19:05 crc kubenswrapper[4821]: I0930 17:19:05.969478 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-24ab-account-create-779jc" Sep 30 17:19:07 crc kubenswrapper[4821]: I0930 17:19:07.681399 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-8c12-account-create-pdc6j"] Sep 30 17:19:07 crc kubenswrapper[4821]: E0930 17:19:07.681761 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17cc7c47-8e87-4cdd-b233-605733bd7444" containerName="mariadb-account-create" Sep 30 17:19:07 crc kubenswrapper[4821]: I0930 17:19:07.681775 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="17cc7c47-8e87-4cdd-b233-605733bd7444" containerName="mariadb-account-create" Sep 30 17:19:07 crc kubenswrapper[4821]: E0930 17:19:07.681808 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21178c4c-b5c9-4507-a186-096e63f59c93" containerName="mariadb-database-create" Sep 30 17:19:07 crc kubenswrapper[4821]: I0930 17:19:07.681815 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="21178c4c-b5c9-4507-a186-096e63f59c93" containerName="mariadb-database-create" Sep 30 17:19:07 crc kubenswrapper[4821]: I0930 17:19:07.681984 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="21178c4c-b5c9-4507-a186-096e63f59c93" containerName="mariadb-database-create" Sep 30 17:19:07 crc kubenswrapper[4821]: I0930 17:19:07.682008 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="17cc7c47-8e87-4cdd-b233-605733bd7444" containerName="mariadb-account-create" Sep 30 17:19:07 crc kubenswrapper[4821]: I0930 17:19:07.682645 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-8c12-account-create-pdc6j" Sep 30 17:19:07 crc kubenswrapper[4821]: I0930 17:19:07.684475 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Sep 30 17:19:07 crc kubenswrapper[4821]: I0930 17:19:07.693765 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8c12-account-create-pdc6j"] Sep 30 17:19:07 crc kubenswrapper[4821]: I0930 17:19:07.769269 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fjgp\" (UniqueName: \"kubernetes.io/projected/30d940f8-4f55-4c00-a99c-e918eb97c401-kube-api-access-4fjgp\") pod \"glance-8c12-account-create-pdc6j\" (UID: \"30d940f8-4f55-4c00-a99c-e918eb97c401\") " pod="openstack/glance-8c12-account-create-pdc6j" Sep 30 17:19:07 crc kubenswrapper[4821]: I0930 17:19:07.870803 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fjgp\" (UniqueName: \"kubernetes.io/projected/30d940f8-4f55-4c00-a99c-e918eb97c401-kube-api-access-4fjgp\") pod \"glance-8c12-account-create-pdc6j\" (UID: \"30d940f8-4f55-4c00-a99c-e918eb97c401\") " pod="openstack/glance-8c12-account-create-pdc6j" Sep 30 17:19:07 crc kubenswrapper[4821]: I0930 17:19:07.891073 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fjgp\" (UniqueName: \"kubernetes.io/projected/30d940f8-4f55-4c00-a99c-e918eb97c401-kube-api-access-4fjgp\") pod \"glance-8c12-account-create-pdc6j\" (UID: \"30d940f8-4f55-4c00-a99c-e918eb97c401\") " pod="openstack/glance-8c12-account-create-pdc6j" Sep 30 17:19:08 crc kubenswrapper[4821]: I0930 17:19:08.045379 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8c12-account-create-pdc6j" Sep 30 17:19:08 crc kubenswrapper[4821]: I0930 17:19:08.291093 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8c12-account-create-pdc6j"] Sep 30 17:19:08 crc kubenswrapper[4821]: I0930 17:19:08.532868 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-4bf4d" podUID="bb53300f-a5be-4cf1-a5db-7847ae0d7e12" containerName="ovn-controller" probeResult="failure" output=< Sep 30 17:19:08 crc kubenswrapper[4821]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Sep 30 17:19:08 crc kubenswrapper[4821]: > Sep 30 17:19:08 crc kubenswrapper[4821]: I0930 17:19:08.561329 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-72fq8" Sep 30 17:19:08 crc kubenswrapper[4821]: I0930 17:19:08.571880 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-72fq8" Sep 30 17:19:08 crc kubenswrapper[4821]: I0930 17:19:08.769812 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-4bf4d-config-h4bxk"] Sep 30 17:19:08 crc kubenswrapper[4821]: I0930 17:19:08.770975 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4bf4d-config-h4bxk" Sep 30 17:19:08 crc kubenswrapper[4821]: I0930 17:19:08.797768 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Sep 30 17:19:08 crc kubenswrapper[4821]: I0930 17:19:08.809411 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4bf4d-config-h4bxk"] Sep 30 17:19:08 crc kubenswrapper[4821]: I0930 17:19:08.891713 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trx9x\" (UniqueName: \"kubernetes.io/projected/0b8d7364-e2e1-4120-9aae-017d3d3232f3-kube-api-access-trx9x\") pod \"ovn-controller-4bf4d-config-h4bxk\" (UID: \"0b8d7364-e2e1-4120-9aae-017d3d3232f3\") " pod="openstack/ovn-controller-4bf4d-config-h4bxk" Sep 30 17:19:08 crc kubenswrapper[4821]: I0930 17:19:08.891770 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0b8d7364-e2e1-4120-9aae-017d3d3232f3-var-run-ovn\") pod \"ovn-controller-4bf4d-config-h4bxk\" (UID: \"0b8d7364-e2e1-4120-9aae-017d3d3232f3\") " pod="openstack/ovn-controller-4bf4d-config-h4bxk" Sep 30 17:19:08 crc kubenswrapper[4821]: I0930 17:19:08.891942 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0b8d7364-e2e1-4120-9aae-017d3d3232f3-var-run\") pod \"ovn-controller-4bf4d-config-h4bxk\" (UID: \"0b8d7364-e2e1-4120-9aae-017d3d3232f3\") " pod="openstack/ovn-controller-4bf4d-config-h4bxk" Sep 30 17:19:08 crc kubenswrapper[4821]: I0930 17:19:08.892040 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0b8d7364-e2e1-4120-9aae-017d3d3232f3-scripts\") pod \"ovn-controller-4bf4d-config-h4bxk\" (UID: \"0b8d7364-e2e1-4120-9aae-017d3d3232f3\") " pod="openstack/ovn-controller-4bf4d-config-h4bxk" Sep 30 17:19:08 crc kubenswrapper[4821]: I0930 17:19:08.892073 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0b8d7364-e2e1-4120-9aae-017d3d3232f3-additional-scripts\") pod \"ovn-controller-4bf4d-config-h4bxk\" (UID: \"0b8d7364-e2e1-4120-9aae-017d3d3232f3\") " pod="openstack/ovn-controller-4bf4d-config-h4bxk" Sep 30 17:19:08 crc kubenswrapper[4821]: I0930 17:19:08.892193 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0b8d7364-e2e1-4120-9aae-017d3d3232f3-var-log-ovn\") pod \"ovn-controller-4bf4d-config-h4bxk\" (UID: \"0b8d7364-e2e1-4120-9aae-017d3d3232f3\") " pod="openstack/ovn-controller-4bf4d-config-h4bxk" Sep 30 17:19:08 crc kubenswrapper[4821]: I0930 17:19:08.937291 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Sep 30 17:19:08 crc kubenswrapper[4821]: I0930 17:19:08.990857 4821 generic.go:334] "Generic (PLEG): container finished" podID="30d940f8-4f55-4c00-a99c-e918eb97c401" containerID="d17081a9494e9b22bcc37842380d01c1e3f9a28a80bee9c39bcd8b61ab67a8ce" exitCode=0 Sep 30 17:19:08 crc kubenswrapper[4821]: I0930 17:19:08.990936 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8c12-account-create-pdc6j" 
event={"ID":"30d940f8-4f55-4c00-a99c-e918eb97c401","Type":"ContainerDied","Data":"d17081a9494e9b22bcc37842380d01c1e3f9a28a80bee9c39bcd8b61ab67a8ce"} Sep 30 17:19:08 crc kubenswrapper[4821]: I0930 17:19:08.991001 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8c12-account-create-pdc6j" event={"ID":"30d940f8-4f55-4c00-a99c-e918eb97c401","Type":"ContainerStarted","Data":"c8b63f4e65eb3caceeb27689710a3862c2b509edf54e8d4596c4afc030177ede"} Sep 30 17:19:08 crc kubenswrapper[4821]: I0930 17:19:08.993596 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0b8d7364-e2e1-4120-9aae-017d3d3232f3-var-log-ovn\") pod \"ovn-controller-4bf4d-config-h4bxk\" (UID: \"0b8d7364-e2e1-4120-9aae-017d3d3232f3\") " pod="openstack/ovn-controller-4bf4d-config-h4bxk" Sep 30 17:19:08 crc kubenswrapper[4821]: I0930 17:19:08.993858 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trx9x\" (UniqueName: \"kubernetes.io/projected/0b8d7364-e2e1-4120-9aae-017d3d3232f3-kube-api-access-trx9x\") pod \"ovn-controller-4bf4d-config-h4bxk\" (UID: \"0b8d7364-e2e1-4120-9aae-017d3d3232f3\") " pod="openstack/ovn-controller-4bf4d-config-h4bxk" Sep 30 17:19:08 crc kubenswrapper[4821]: I0930 17:19:08.993911 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0b8d7364-e2e1-4120-9aae-017d3d3232f3-var-log-ovn\") pod \"ovn-controller-4bf4d-config-h4bxk\" (UID: \"0b8d7364-e2e1-4120-9aae-017d3d3232f3\") " pod="openstack/ovn-controller-4bf4d-config-h4bxk" Sep 30 17:19:08 crc kubenswrapper[4821]: I0930 17:19:08.994012 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0b8d7364-e2e1-4120-9aae-017d3d3232f3-var-run-ovn\") pod \"ovn-controller-4bf4d-config-h4bxk\" (UID: \"0b8d7364-e2e1-4120-9aae-017d3d3232f3\") " pod="openstack/ovn-controller-4bf4d-config-h4bxk" Sep 30 17:19:08 crc kubenswrapper[4821]: I0930 17:19:08.994216 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0b8d7364-e2e1-4120-9aae-017d3d3232f3-var-run\") pod \"ovn-controller-4bf4d-config-h4bxk\" (UID: \"0b8d7364-e2e1-4120-9aae-017d3d3232f3\") " pod="openstack/ovn-controller-4bf4d-config-h4bxk" Sep 30 17:19:08 crc kubenswrapper[4821]: I0930 17:19:08.994302 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0b8d7364-e2e1-4120-9aae-017d3d3232f3-scripts\") pod \"ovn-controller-4bf4d-config-h4bxk\" (UID: \"0b8d7364-e2e1-4120-9aae-017d3d3232f3\") " pod="openstack/ovn-controller-4bf4d-config-h4bxk" Sep 30 17:19:08 crc kubenswrapper[4821]: I0930 17:19:08.994342 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0b8d7364-e2e1-4120-9aae-017d3d3232f3-additional-scripts\") pod \"ovn-controller-4bf4d-config-h4bxk\" (UID: \"0b8d7364-e2e1-4120-9aae-017d3d3232f3\") " pod="openstack/ovn-controller-4bf4d-config-h4bxk" Sep 30 17:19:08 crc kubenswrapper[4821]: I0930 17:19:08.994356 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0b8d7364-e2e1-4120-9aae-017d3d3232f3-var-run\") pod \"ovn-controller-4bf4d-config-h4bxk\" (UID: \"0b8d7364-e2e1-4120-9aae-017d3d3232f3\") 
" pod="openstack/ovn-controller-4bf4d-config-h4bxk" Sep 30 17:19:08 crc kubenswrapper[4821]: I0930 17:19:08.994522 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0b8d7364-e2e1-4120-9aae-017d3d3232f3-var-run-ovn\") pod \"ovn-controller-4bf4d-config-h4bxk\" (UID: \"0b8d7364-e2e1-4120-9aae-017d3d3232f3\") " pod="openstack/ovn-controller-4bf4d-config-h4bxk" Sep 30 17:19:08 crc kubenswrapper[4821]: I0930 17:19:08.995151 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0b8d7364-e2e1-4120-9aae-017d3d3232f3-additional-scripts\") pod \"ovn-controller-4bf4d-config-h4bxk\" (UID: \"0b8d7364-e2e1-4120-9aae-017d3d3232f3\") " pod="openstack/ovn-controller-4bf4d-config-h4bxk" Sep 30 17:19:08 crc kubenswrapper[4821]: I0930 17:19:08.996668 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0b8d7364-e2e1-4120-9aae-017d3d3232f3-scripts\") pod \"ovn-controller-4bf4d-config-h4bxk\" (UID: \"0b8d7364-e2e1-4120-9aae-017d3d3232f3\") " pod="openstack/ovn-controller-4bf4d-config-h4bxk" Sep 30 17:19:09 crc kubenswrapper[4821]: I0930 17:19:09.023037 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trx9x\" (UniqueName: \"kubernetes.io/projected/0b8d7364-e2e1-4120-9aae-017d3d3232f3-kube-api-access-trx9x\") pod \"ovn-controller-4bf4d-config-h4bxk\" (UID: \"0b8d7364-e2e1-4120-9aae-017d3d3232f3\") " pod="openstack/ovn-controller-4bf4d-config-h4bxk" Sep 30 17:19:09 crc kubenswrapper[4821]: I0930 17:19:09.136722 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4bf4d-config-h4bxk" Sep 30 17:19:09 crc kubenswrapper[4821]: I0930 17:19:09.206247 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-h9rqx"] Sep 30 17:19:09 crc kubenswrapper[4821]: I0930 17:19:09.211996 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-h9rqx" Sep 30 17:19:09 crc kubenswrapper[4821]: I0930 17:19:09.250309 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-h9rqx"] Sep 30 17:19:09 crc kubenswrapper[4821]: I0930 17:19:09.299888 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhwhv\" (UniqueName: \"kubernetes.io/projected/9c66275b-5969-4d5b-94b3-e5b7af477685-kube-api-access-lhwhv\") pod \"cinder-db-create-h9rqx\" (UID: \"9c66275b-5969-4d5b-94b3-e5b7af477685\") " pod="openstack/cinder-db-create-h9rqx" Sep 30 17:19:09 crc kubenswrapper[4821]: I0930 17:19:09.331328 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:19:09 crc kubenswrapper[4821]: I0930 17:19:09.404045 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhwhv\" (UniqueName: \"kubernetes.io/projected/9c66275b-5969-4d5b-94b3-e5b7af477685-kube-api-access-lhwhv\") pod \"cinder-db-create-h9rqx\" (UID: \"9c66275b-5969-4d5b-94b3-e5b7af477685\") " pod="openstack/cinder-db-create-h9rqx" Sep 30 17:19:09 crc kubenswrapper[4821]: I0930 17:19:09.437071 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhwhv\" (UniqueName: \"kubernetes.io/projected/9c66275b-5969-4d5b-94b3-e5b7af477685-kube-api-access-lhwhv\") pod \"cinder-db-create-h9rqx\" (UID: \"9c66275b-5969-4d5b-94b3-e5b7af477685\") " pod="openstack/cinder-db-create-h9rqx" Sep 30 17:19:09 crc kubenswrapper[4821]: I0930 17:19:09.507212 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-qvvlk"] Sep 30 17:19:09 crc kubenswrapper[4821]: I0930 17:19:09.508630 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-qvvlk" Sep 30 17:19:09 crc kubenswrapper[4821]: I0930 17:19:09.514904 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-qvvlk"] Sep 30 17:19:09 crc kubenswrapper[4821]: I0930 17:19:09.551910 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-h9rqx" Sep 30 17:19:09 crc kubenswrapper[4821]: I0930 17:19:09.604204 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4bf4d-config-h4bxk"] Sep 30 17:19:09 crc kubenswrapper[4821]: I0930 17:19:09.608450 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bncxb\" (UniqueName: \"kubernetes.io/projected/77c144f3-5510-466a-b6d7-8a66896a5a89-kube-api-access-bncxb\") pod \"neutron-db-create-qvvlk\" (UID: \"77c144f3-5510-466a-b6d7-8a66896a5a89\") " pod="openstack/neutron-db-create-qvvlk" Sep 30 17:19:09 crc kubenswrapper[4821]: I0930 17:19:09.709444 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bncxb\" (UniqueName: \"kubernetes.io/projected/77c144f3-5510-466a-b6d7-8a66896a5a89-kube-api-access-bncxb\") pod \"neutron-db-create-qvvlk\" (UID: \"77c144f3-5510-466a-b6d7-8a66896a5a89\") " pod="openstack/neutron-db-create-qvvlk" Sep 30 17:19:09 crc kubenswrapper[4821]: I0930 17:19:09.725255 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bncxb\" (UniqueName: \"kubernetes.io/projected/77c144f3-5510-466a-b6d7-8a66896a5a89-kube-api-access-bncxb\") pod \"neutron-db-create-qvvlk\" (UID: \"77c144f3-5510-466a-b6d7-8a66896a5a89\") " pod="openstack/neutron-db-create-qvvlk" Sep 30 17:19:09 crc kubenswrapper[4821]: I0930 17:19:09.835590 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-qvvlk" Sep 30 17:19:09 crc kubenswrapper[4821]: I0930 17:19:09.999314 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4bf4d-config-h4bxk" event={"ID":"0b8d7364-e2e1-4120-9aae-017d3d3232f3","Type":"ContainerStarted","Data":"1f0e33029c22e12918aec4bf410912fa3149868dd983dca33f9a1188845cf66a"} Sep 30 17:19:09 crc kubenswrapper[4821]: I0930 17:19:09.999352 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4bf4d-config-h4bxk" event={"ID":"0b8d7364-e2e1-4120-9aae-017d3d3232f3","Type":"ContainerStarted","Data":"796ca8df9e95d59a92a1d70b1e826179820db73815f9ba767e465028d9d96938"} Sep 30 17:19:10 crc kubenswrapper[4821]: I0930 17:19:10.020483 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-4bf4d-config-h4bxk" podStartSLOduration=2.020465745 podStartE2EDuration="2.020465745s" podCreationTimestamp="2025-09-30 17:19:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:19:10.017662405 +0000 UTC m=+945.922708349" watchObservedRunningTime="2025-09-30 17:19:10.020465745 +0000 UTC m=+945.925511689" Sep 30 17:19:10 crc kubenswrapper[4821]: I0930 17:19:10.094913 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-h9rqx"] Sep 30 17:19:10 crc kubenswrapper[4821]: I0930 17:19:10.122174 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-qvvlk"] Sep 30 17:19:10 crc kubenswrapper[4821]: I0930 17:19:10.376141 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-8c12-account-create-pdc6j" Sep 30 17:19:10 crc kubenswrapper[4821]: I0930 17:19:10.522684 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fjgp\" (UniqueName: \"kubernetes.io/projected/30d940f8-4f55-4c00-a99c-e918eb97c401-kube-api-access-4fjgp\") pod \"30d940f8-4f55-4c00-a99c-e918eb97c401\" (UID: \"30d940f8-4f55-4c00-a99c-e918eb97c401\") " Sep 30 17:19:10 crc kubenswrapper[4821]: I0930 17:19:10.527817 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30d940f8-4f55-4c00-a99c-e918eb97c401-kube-api-access-4fjgp" (OuterVolumeSpecName: "kube-api-access-4fjgp") pod "30d940f8-4f55-4c00-a99c-e918eb97c401" (UID: "30d940f8-4f55-4c00-a99c-e918eb97c401"). InnerVolumeSpecName "kube-api-access-4fjgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:19:10 crc kubenswrapper[4821]: I0930 17:19:10.625197 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fjgp\" (UniqueName: \"kubernetes.io/projected/30d940f8-4f55-4c00-a99c-e918eb97c401-kube-api-access-4fjgp\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:11 crc kubenswrapper[4821]: I0930 17:19:11.006313 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8c12-account-create-pdc6j" event={"ID":"30d940f8-4f55-4c00-a99c-e918eb97c401","Type":"ContainerDied","Data":"c8b63f4e65eb3caceeb27689710a3862c2b509edf54e8d4596c4afc030177ede"} Sep 30 17:19:11 crc kubenswrapper[4821]: I0930 17:19:11.006334 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8c12-account-create-pdc6j" Sep 30 17:19:11 crc kubenswrapper[4821]: I0930 17:19:11.006361 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8b63f4e65eb3caceeb27689710a3862c2b509edf54e8d4596c4afc030177ede" Sep 30 17:19:11 crc kubenswrapper[4821]: I0930 17:19:11.008609 4821 generic.go:334] "Generic (PLEG): container finished" podID="77c144f3-5510-466a-b6d7-8a66896a5a89" containerID="8817fb7a7429171b786358c114c47effa44284cd366c3e9219d699d2b5f40178" exitCode=0 Sep 30 17:19:11 crc kubenswrapper[4821]: I0930 17:19:11.008675 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qvvlk" event={"ID":"77c144f3-5510-466a-b6d7-8a66896a5a89","Type":"ContainerDied","Data":"8817fb7a7429171b786358c114c47effa44284cd366c3e9219d699d2b5f40178"} Sep 30 17:19:11 crc kubenswrapper[4821]: I0930 17:19:11.008699 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qvvlk" event={"ID":"77c144f3-5510-466a-b6d7-8a66896a5a89","Type":"ContainerStarted","Data":"dcc1fcbcb42081e69ea1622f3207560f6f906f3d48ff1e21f48db2b042ec1199"} Sep 30 17:19:11 crc kubenswrapper[4821]: I0930 17:19:11.010408 4821 generic.go:334] "Generic (PLEG): container finished" podID="0b8d7364-e2e1-4120-9aae-017d3d3232f3" containerID="1f0e33029c22e12918aec4bf410912fa3149868dd983dca33f9a1188845cf66a" exitCode=0 Sep 30 17:19:11 crc kubenswrapper[4821]: I0930 17:19:11.010528 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4bf4d-config-h4bxk" event={"ID":"0b8d7364-e2e1-4120-9aae-017d3d3232f3","Type":"ContainerDied","Data":"1f0e33029c22e12918aec4bf410912fa3149868dd983dca33f9a1188845cf66a"} Sep 30 17:19:11 crc kubenswrapper[4821]: I0930 17:19:11.012493 4821 generic.go:334] "Generic (PLEG): container finished" podID="9c66275b-5969-4d5b-94b3-e5b7af477685" 
containerID="e77a8a506ac11ebdf511feb7b6a60516b9aef94db6df12a16013b03a3176e3ce" exitCode=0 Sep 30 17:19:11 crc kubenswrapper[4821]: I0930 17:19:11.012522 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-h9rqx" event={"ID":"9c66275b-5969-4d5b-94b3-e5b7af477685","Type":"ContainerDied","Data":"e77a8a506ac11ebdf511feb7b6a60516b9aef94db6df12a16013b03a3176e3ce"} Sep 30 17:19:11 crc kubenswrapper[4821]: I0930 17:19:11.012591 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-h9rqx" event={"ID":"9c66275b-5969-4d5b-94b3-e5b7af477685","Type":"ContainerStarted","Data":"643e6900ce2e1f8b4e63a7ee9a31522cee70e2051212b8254958dd481632d580"} Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.073936 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-46a4-account-create-qkt5h"] Sep 30 17:19:12 crc kubenswrapper[4821]: E0930 17:19:12.074510 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d940f8-4f55-4c00-a99c-e918eb97c401" containerName="mariadb-account-create" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.074522 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d940f8-4f55-4c00-a99c-e918eb97c401" containerName="mariadb-account-create" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.074688 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d940f8-4f55-4c00-a99c-e918eb97c401" containerName="mariadb-account-create" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.075177 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-46a4-account-create-qkt5h" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.088245 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.152414 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7tl4\" (UniqueName: \"kubernetes.io/projected/26bf384d-37cd-463e-b991-e4ba8646dd99-kube-api-access-n7tl4\") pod \"keystone-46a4-account-create-qkt5h\" (UID: \"26bf384d-37cd-463e-b991-e4ba8646dd99\") " pod="openstack/keystone-46a4-account-create-qkt5h" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.163024 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-46a4-account-create-qkt5h"] Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.254266 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7tl4\" (UniqueName: \"kubernetes.io/projected/26bf384d-37cd-463e-b991-e4ba8646dd99-kube-api-access-n7tl4\") pod \"keystone-46a4-account-create-qkt5h\" (UID: \"26bf384d-37cd-463e-b991-e4ba8646dd99\") " pod="openstack/keystone-46a4-account-create-qkt5h" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.276720 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7tl4\" (UniqueName: \"kubernetes.io/projected/26bf384d-37cd-463e-b991-e4ba8646dd99-kube-api-access-n7tl4\") pod \"keystone-46a4-account-create-qkt5h\" (UID: \"26bf384d-37cd-463e-b991-e4ba8646dd99\") " pod="openstack/keystone-46a4-account-create-qkt5h" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.369272 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-h9rqx" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.429806 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-46a4-account-create-qkt5h" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.457054 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhwhv\" (UniqueName: \"kubernetes.io/projected/9c66275b-5969-4d5b-94b3-e5b7af477685-kube-api-access-lhwhv\") pod \"9c66275b-5969-4d5b-94b3-e5b7af477685\" (UID: \"9c66275b-5969-4d5b-94b3-e5b7af477685\") " Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.462327 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c66275b-5969-4d5b-94b3-e5b7af477685-kube-api-access-lhwhv" (OuterVolumeSpecName: "kube-api-access-lhwhv") pod "9c66275b-5969-4d5b-94b3-e5b7af477685" (UID: "9c66275b-5969-4d5b-94b3-e5b7af477685"). InnerVolumeSpecName "kube-api-access-lhwhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.508329 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4bf4d-config-h4bxk" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.517553 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-qvvlk" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.558017 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0b8d7364-e2e1-4120-9aae-017d3d3232f3-additional-scripts\") pod \"0b8d7364-e2e1-4120-9aae-017d3d3232f3\" (UID: \"0b8d7364-e2e1-4120-9aae-017d3d3232f3\") " Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.558398 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0b8d7364-e2e1-4120-9aae-017d3d3232f3-var-run-ovn\") pod \"0b8d7364-e2e1-4120-9aae-017d3d3232f3\" (UID: \"0b8d7364-e2e1-4120-9aae-017d3d3232f3\") " Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.558432 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trx9x\" (UniqueName: \"kubernetes.io/projected/0b8d7364-e2e1-4120-9aae-017d3d3232f3-kube-api-access-trx9x\") pod \"0b8d7364-e2e1-4120-9aae-017d3d3232f3\" (UID: \"0b8d7364-e2e1-4120-9aae-017d3d3232f3\") " Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.558512 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0b8d7364-e2e1-4120-9aae-017d3d3232f3-var-log-ovn\") pod \"0b8d7364-e2e1-4120-9aae-017d3d3232f3\" (UID: \"0b8d7364-e2e1-4120-9aae-017d3d3232f3\") " Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.558535 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0b8d7364-e2e1-4120-9aae-017d3d3232f3-var-run\") pod \"0b8d7364-e2e1-4120-9aae-017d3d3232f3\" (UID: \"0b8d7364-e2e1-4120-9aae-017d3d3232f3\") " Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.558595 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0b8d7364-e2e1-4120-9aae-017d3d3232f3-scripts\") pod \"0b8d7364-e2e1-4120-9aae-017d3d3232f3\" (UID: 
\"0b8d7364-e2e1-4120-9aae-017d3d3232f3\") " Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.558940 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhwhv\" (UniqueName: \"kubernetes.io/projected/9c66275b-5969-4d5b-94b3-e5b7af477685-kube-api-access-lhwhv\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.559711 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b8d7364-e2e1-4120-9aae-017d3d3232f3-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "0b8d7364-e2e1-4120-9aae-017d3d3232f3" (UID: "0b8d7364-e2e1-4120-9aae-017d3d3232f3"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.559773 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b8d7364-e2e1-4120-9aae-017d3d3232f3-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "0b8d7364-e2e1-4120-9aae-017d3d3232f3" (UID: "0b8d7364-e2e1-4120-9aae-017d3d3232f3"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.559794 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b8d7364-e2e1-4120-9aae-017d3d3232f3-var-run" (OuterVolumeSpecName: "var-run") pod "0b8d7364-e2e1-4120-9aae-017d3d3232f3" (UID: "0b8d7364-e2e1-4120-9aae-017d3d3232f3"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.560408 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b8d7364-e2e1-4120-9aae-017d3d3232f3-scripts" (OuterVolumeSpecName: "scripts") pod "0b8d7364-e2e1-4120-9aae-017d3d3232f3" (UID: "0b8d7364-e2e1-4120-9aae-017d3d3232f3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.560696 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b8d7364-e2e1-4120-9aae-017d3d3232f3-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "0b8d7364-e2e1-4120-9aae-017d3d3232f3" (UID: "0b8d7364-e2e1-4120-9aae-017d3d3232f3"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.566108 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b8d7364-e2e1-4120-9aae-017d3d3232f3-kube-api-access-trx9x" (OuterVolumeSpecName: "kube-api-access-trx9x") pod "0b8d7364-e2e1-4120-9aae-017d3d3232f3" (UID: "0b8d7364-e2e1-4120-9aae-017d3d3232f3"). InnerVolumeSpecName "kube-api-access-trx9x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.660135 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bncxb\" (UniqueName: \"kubernetes.io/projected/77c144f3-5510-466a-b6d7-8a66896a5a89-kube-api-access-bncxb\") pod \"77c144f3-5510-466a-b6d7-8a66896a5a89\" (UID: \"77c144f3-5510-466a-b6d7-8a66896a5a89\") " Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.660599 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0b8d7364-e2e1-4120-9aae-017d3d3232f3-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.660612 4821 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0b8d7364-e2e1-4120-9aae-017d3d3232f3-additional-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.660623 4821 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0b8d7364-e2e1-4120-9aae-017d3d3232f3-var-run-ovn\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.660633 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trx9x\" (UniqueName: \"kubernetes.io/projected/0b8d7364-e2e1-4120-9aae-017d3d3232f3-kube-api-access-trx9x\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.660642 4821 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0b8d7364-e2e1-4120-9aae-017d3d3232f3-var-log-ovn\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.660650 4821 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0b8d7364-e2e1-4120-9aae-017d3d3232f3-var-run\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.663259 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77c144f3-5510-466a-b6d7-8a66896a5a89-kube-api-access-bncxb" (OuterVolumeSpecName: "kube-api-access-bncxb") pod "77c144f3-5510-466a-b6d7-8a66896a5a89" (UID: "77c144f3-5510-466a-b6d7-8a66896a5a89"). InnerVolumeSpecName "kube-api-access-bncxb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.765972 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bncxb\" (UniqueName: \"kubernetes.io/projected/77c144f3-5510-466a-b6d7-8a66896a5a89-kube-api-access-bncxb\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.800743 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-tmcbt"] Sep 30 17:19:12 crc kubenswrapper[4821]: E0930 17:19:12.801286 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77c144f3-5510-466a-b6d7-8a66896a5a89" containerName="mariadb-database-create" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.801391 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="77c144f3-5510-466a-b6d7-8a66896a5a89" containerName="mariadb-database-create" Sep 30 17:19:12 crc kubenswrapper[4821]: E0930 17:19:12.801454 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b8d7364-e2e1-4120-9aae-017d3d3232f3" containerName="ovn-config" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.801511 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b8d7364-e2e1-4120-9aae-017d3d3232f3" containerName="ovn-config" Sep 30 17:19:12 crc kubenswrapper[4821]: E0930 17:19:12.801571 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c66275b-5969-4d5b-94b3-e5b7af477685" containerName="mariadb-database-create" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.801633 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c66275b-5969-4d5b-94b3-e5b7af477685" containerName="mariadb-database-create" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.801836 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="77c144f3-5510-466a-b6d7-8a66896a5a89" containerName="mariadb-database-create" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.801898 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b8d7364-e2e1-4120-9aae-017d3d3232f3" containerName="ovn-config" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.801961 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c66275b-5969-4d5b-94b3-e5b7af477685" containerName="mariadb-database-create" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.802504 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-tmcbt" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.804726 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.804902 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-8kj9b" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.817988 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-tmcbt"] Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.867321 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76b2a1e3-7600-4e3f-a2ec-91983582bfa0-combined-ca-bundle\") pod \"glance-db-sync-tmcbt\" (UID: \"76b2a1e3-7600-4e3f-a2ec-91983582bfa0\") " pod="openstack/glance-db-sync-tmcbt" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.867443 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/76b2a1e3-7600-4e3f-a2ec-91983582bfa0-db-sync-config-data\") pod \"glance-db-sync-tmcbt\" (UID: \"76b2a1e3-7600-4e3f-a2ec-91983582bfa0\") " pod="openstack/glance-db-sync-tmcbt" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.867586 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b2a1e3-7600-4e3f-a2ec-91983582bfa0-config-data\") pod \"glance-db-sync-tmcbt\" (UID: \"76b2a1e3-7600-4e3f-a2ec-91983582bfa0\") " pod="openstack/glance-db-sync-tmcbt" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.867647 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvw47\" (UniqueName: \"kubernetes.io/projected/76b2a1e3-7600-4e3f-a2ec-91983582bfa0-kube-api-access-pvw47\") pod \"glance-db-sync-tmcbt\" (UID: \"76b2a1e3-7600-4e3f-a2ec-91983582bfa0\") " pod="openstack/glance-db-sync-tmcbt" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.918098 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-46a4-account-create-qkt5h"] Sep 30 17:19:12 crc kubenswrapper[4821]: W0930 17:19:12.925718 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26bf384d_37cd_463e_b991_e4ba8646dd99.slice/crio-c74ce88dcfd276f37513c821d86d319207a1401b2a9c4d15a3e7b1bb99ae8541 WatchSource:0}: Error finding container c74ce88dcfd276f37513c821d86d319207a1401b2a9c4d15a3e7b1bb99ae8541: Status 404 returned error can't find the container with id c74ce88dcfd276f37513c821d86d319207a1401b2a9c4d15a3e7b1bb99ae8541 Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.969459 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76b2a1e3-7600-4e3f-a2ec-91983582bfa0-combined-ca-bundle\") pod \"glance-db-sync-tmcbt\" (UID: \"76b2a1e3-7600-4e3f-a2ec-91983582bfa0\") " pod="openstack/glance-db-sync-tmcbt" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.969547 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/76b2a1e3-7600-4e3f-a2ec-91983582bfa0-db-sync-config-data\") pod \"glance-db-sync-tmcbt\" (UID: 
\"76b2a1e3-7600-4e3f-a2ec-91983582bfa0\") " pod="openstack/glance-db-sync-tmcbt" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.969618 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b2a1e3-7600-4e3f-a2ec-91983582bfa0-config-data\") pod \"glance-db-sync-tmcbt\" (UID: \"76b2a1e3-7600-4e3f-a2ec-91983582bfa0\") " pod="openstack/glance-db-sync-tmcbt" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.969649 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvw47\" (UniqueName: \"kubernetes.io/projected/76b2a1e3-7600-4e3f-a2ec-91983582bfa0-kube-api-access-pvw47\") pod \"glance-db-sync-tmcbt\" (UID: \"76b2a1e3-7600-4e3f-a2ec-91983582bfa0\") " pod="openstack/glance-db-sync-tmcbt" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.975282 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b2a1e3-7600-4e3f-a2ec-91983582bfa0-config-data\") pod \"glance-db-sync-tmcbt\" (UID: \"76b2a1e3-7600-4e3f-a2ec-91983582bfa0\") " pod="openstack/glance-db-sync-tmcbt" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.976887 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/76b2a1e3-7600-4e3f-a2ec-91983582bfa0-db-sync-config-data\") pod \"glance-db-sync-tmcbt\" (UID: \"76b2a1e3-7600-4e3f-a2ec-91983582bfa0\") " pod="openstack/glance-db-sync-tmcbt" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.978203 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76b2a1e3-7600-4e3f-a2ec-91983582bfa0-combined-ca-bundle\") pod \"glance-db-sync-tmcbt\" (UID: \"76b2a1e3-7600-4e3f-a2ec-91983582bfa0\") " pod="openstack/glance-db-sync-tmcbt" Sep 30 17:19:12 crc kubenswrapper[4821]: I0930 17:19:12.990020 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvw47\" (UniqueName: \"kubernetes.io/projected/76b2a1e3-7600-4e3f-a2ec-91983582bfa0-kube-api-access-pvw47\") pod \"glance-db-sync-tmcbt\" (UID: \"76b2a1e3-7600-4e3f-a2ec-91983582bfa0\") " pod="openstack/glance-db-sync-tmcbt" Sep 30 17:19:13 crc kubenswrapper[4821]: I0930 17:19:13.029250 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-qvvlk" Sep 30 17:19:13 crc kubenswrapper[4821]: I0930 17:19:13.029424 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qvvlk" event={"ID":"77c144f3-5510-466a-b6d7-8a66896a5a89","Type":"ContainerDied","Data":"dcc1fcbcb42081e69ea1622f3207560f6f906f3d48ff1e21f48db2b042ec1199"} Sep 30 17:19:13 crc kubenswrapper[4821]: I0930 17:19:13.029850 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcc1fcbcb42081e69ea1622f3207560f6f906f3d48ff1e21f48db2b042ec1199" Sep 30 17:19:13 crc kubenswrapper[4821]: I0930 17:19:13.031897 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4bf4d-config-h4bxk" event={"ID":"0b8d7364-e2e1-4120-9aae-017d3d3232f3","Type":"ContainerDied","Data":"796ca8df9e95d59a92a1d70b1e826179820db73815f9ba767e465028d9d96938"} Sep 30 17:19:13 crc kubenswrapper[4821]: I0930 17:19:13.031947 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="796ca8df9e95d59a92a1d70b1e826179820db73815f9ba767e465028d9d96938" Sep 30 17:19:13 crc kubenswrapper[4821]: I0930 17:19:13.032118 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4bf4d-config-h4bxk" Sep 30 17:19:13 crc kubenswrapper[4821]: I0930 17:19:13.035715 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-h9rqx" event={"ID":"9c66275b-5969-4d5b-94b3-e5b7af477685","Type":"ContainerDied","Data":"643e6900ce2e1f8b4e63a7ee9a31522cee70e2051212b8254958dd481632d580"} Sep 30 17:19:13 crc kubenswrapper[4821]: I0930 17:19:13.035747 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="643e6900ce2e1f8b4e63a7ee9a31522cee70e2051212b8254958dd481632d580" Sep 30 17:19:13 crc kubenswrapper[4821]: I0930 17:19:13.037124 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-h9rqx" Sep 30 17:19:13 crc kubenswrapper[4821]: I0930 17:19:13.039132 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-46a4-account-create-qkt5h" event={"ID":"26bf384d-37cd-463e-b991-e4ba8646dd99","Type":"ContainerStarted","Data":"c74ce88dcfd276f37513c821d86d319207a1401b2a9c4d15a3e7b1bb99ae8541"} Sep 30 17:19:13 crc kubenswrapper[4821]: I0930 17:19:13.115997 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-tmcbt" Sep 30 17:19:13 crc kubenswrapper[4821]: I0930 17:19:13.140217 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-4bf4d-config-h4bxk"] Sep 30 17:19:13 crc kubenswrapper[4821]: I0930 17:19:13.148301 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-4bf4d-config-h4bxk"] Sep 30 17:19:13 crc kubenswrapper[4821]: I0930 17:19:13.538977 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-4bf4d" Sep 30 17:19:13 crc kubenswrapper[4821]: I0930 17:19:13.679250 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-tmcbt"] Sep 30 17:19:14 crc kubenswrapper[4821]: I0930 17:19:14.047120 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tmcbt" event={"ID":"76b2a1e3-7600-4e3f-a2ec-91983582bfa0","Type":"ContainerStarted","Data":"89291e9f9347ef419f6641b1249d380dafd6d2554388e79fa27ddba72de60b76"} Sep 30 17:19:14 crc kubenswrapper[4821]: I0930 17:19:14.048877 4821 generic.go:334] "Generic (PLEG): container finished" podID="26bf384d-37cd-463e-b991-e4ba8646dd99" containerID="eb5f276145bbba19b7088a9c8324ea2a937e5f7dffdc32cbc2b0d93314a0f626" exitCode=0 Sep 30 17:19:14 crc kubenswrapper[4821]: I0930 17:19:14.048905 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-46a4-account-create-qkt5h" event={"ID":"26bf384d-37cd-463e-b991-e4ba8646dd99","Type":"ContainerDied","Data":"eb5f276145bbba19b7088a9c8324ea2a937e5f7dffdc32cbc2b0d93314a0f626"} Sep 30 17:19:14 crc kubenswrapper[4821]: I0930 17:19:14.721650 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b8d7364-e2e1-4120-9aae-017d3d3232f3" path="/var/lib/kubelet/pods/0b8d7364-e2e1-4120-9aae-017d3d3232f3/volumes" Sep 30 17:19:15 crc kubenswrapper[4821]: I0930 17:19:15.352607 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-46a4-account-create-qkt5h" Sep 30 17:19:15 crc kubenswrapper[4821]: I0930 17:19:15.452412 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7tl4\" (UniqueName: \"kubernetes.io/projected/26bf384d-37cd-463e-b991-e4ba8646dd99-kube-api-access-n7tl4\") pod \"26bf384d-37cd-463e-b991-e4ba8646dd99\" (UID: \"26bf384d-37cd-463e-b991-e4ba8646dd99\") " Sep 30 17:19:15 crc kubenswrapper[4821]: I0930 17:19:15.458915 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26bf384d-37cd-463e-b991-e4ba8646dd99-kube-api-access-n7tl4" (OuterVolumeSpecName: "kube-api-access-n7tl4") pod "26bf384d-37cd-463e-b991-e4ba8646dd99" (UID: "26bf384d-37cd-463e-b991-e4ba8646dd99"). InnerVolumeSpecName "kube-api-access-n7tl4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:19:15 crc kubenswrapper[4821]: I0930 17:19:15.560558 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7tl4\" (UniqueName: \"kubernetes.io/projected/26bf384d-37cd-463e-b991-e4ba8646dd99-kube-api-access-n7tl4\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:16 crc kubenswrapper[4821]: I0930 17:19:16.067127 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-46a4-account-create-qkt5h" event={"ID":"26bf384d-37cd-463e-b991-e4ba8646dd99","Type":"ContainerDied","Data":"c74ce88dcfd276f37513c821d86d319207a1401b2a9c4d15a3e7b1bb99ae8541"} Sep 30 17:19:16 crc kubenswrapper[4821]: I0930 17:19:16.067174 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c74ce88dcfd276f37513c821d86d319207a1401b2a9c4d15a3e7b1bb99ae8541" Sep 30 17:19:16 crc kubenswrapper[4821]: I0930 17:19:16.067205 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-46a4-account-create-qkt5h" Sep 30 17:19:17 crc kubenswrapper[4821]: I0930 17:19:17.648755 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-gfht9"] Sep 30 17:19:17 crc kubenswrapper[4821]: E0930 17:19:17.649539 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26bf384d-37cd-463e-b991-e4ba8646dd99" containerName="mariadb-account-create" Sep 30 17:19:17 crc kubenswrapper[4821]: I0930 17:19:17.649553 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="26bf384d-37cd-463e-b991-e4ba8646dd99" containerName="mariadb-account-create" Sep 30 17:19:17 crc kubenswrapper[4821]: I0930 17:19:17.649822 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="26bf384d-37cd-463e-b991-e4ba8646dd99" containerName="mariadb-account-create" Sep 30 17:19:17 crc kubenswrapper[4821]: I0930 17:19:17.650597 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-gfht9" Sep 30 17:19:17 crc kubenswrapper[4821]: I0930 17:19:17.654852 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Sep 30 17:19:17 crc kubenswrapper[4821]: I0930 17:19:17.655035 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dkq42" Sep 30 17:19:17 crc kubenswrapper[4821]: I0930 17:19:17.655219 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Sep 30 17:19:17 crc kubenswrapper[4821]: I0930 17:19:17.655361 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Sep 30 17:19:17 crc kubenswrapper[4821]: I0930 17:19:17.670693 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-gfht9"] Sep 30 17:19:17 crc kubenswrapper[4821]: I0930 17:19:17.801870 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsrs5\" (UniqueName: \"kubernetes.io/projected/a311a148-21f0-4b76-81f6-c9190d61a8c3-kube-api-access-nsrs5\") pod \"keystone-db-sync-gfht9\" (UID: \"a311a148-21f0-4b76-81f6-c9190d61a8c3\") " pod="openstack/keystone-db-sync-gfht9" Sep 30 17:19:17 crc kubenswrapper[4821]: I0930 17:19:17.801942 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a311a148-21f0-4b76-81f6-c9190d61a8c3-config-data\") pod \"keystone-db-sync-gfht9\" (UID: \"a311a148-21f0-4b76-81f6-c9190d61a8c3\") " pod="openstack/keystone-db-sync-gfht9" Sep 30 17:19:17 crc kubenswrapper[4821]: I0930 17:19:17.801965 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a311a148-21f0-4b76-81f6-c9190d61a8c3-combined-ca-bundle\") pod \"keystone-db-sync-gfht9\" (UID: \"a311a148-21f0-4b76-81f6-c9190d61a8c3\") " pod="openstack/keystone-db-sync-gfht9" Sep 30 17:19:17 crc kubenswrapper[4821]: I0930 17:19:17.903884 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsrs5\" (UniqueName: \"kubernetes.io/projected/a311a148-21f0-4b76-81f6-c9190d61a8c3-kube-api-access-nsrs5\") pod \"keystone-db-sync-gfht9\" (UID: \"a311a148-21f0-4b76-81f6-c9190d61a8c3\") " pod="openstack/keystone-db-sync-gfht9" Sep 30 17:19:17 crc kubenswrapper[4821]: I0930 17:19:17.903961 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a311a148-21f0-4b76-81f6-c9190d61a8c3-config-data\") pod \"keystone-db-sync-gfht9\" (UID: \"a311a148-21f0-4b76-81f6-c9190d61a8c3\") " pod="openstack/keystone-db-sync-gfht9" Sep 30 17:19:17 crc kubenswrapper[4821]: I0930 17:19:17.903984 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a311a148-21f0-4b76-81f6-c9190d61a8c3-combined-ca-bundle\") pod \"keystone-db-sync-gfht9\" (UID: \"a311a148-21f0-4b76-81f6-c9190d61a8c3\") " pod="openstack/keystone-db-sync-gfht9" Sep 30 17:19:17 crc kubenswrapper[4821]: I0930 17:19:17.921230 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a311a148-21f0-4b76-81f6-c9190d61a8c3-combined-ca-bundle\") pod \"keystone-db-sync-gfht9\" (UID: \"a311a148-21f0-4b76-81f6-c9190d61a8c3\") " 
pod="openstack/keystone-db-sync-gfht9" Sep 30 17:19:17 crc kubenswrapper[4821]: I0930 17:19:17.923487 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsrs5\" (UniqueName: \"kubernetes.io/projected/a311a148-21f0-4b76-81f6-c9190d61a8c3-kube-api-access-nsrs5\") pod \"keystone-db-sync-gfht9\" (UID: \"a311a148-21f0-4b76-81f6-c9190d61a8c3\") " pod="openstack/keystone-db-sync-gfht9" Sep 30 17:19:17 crc kubenswrapper[4821]: I0930 17:19:17.924918 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a311a148-21f0-4b76-81f6-c9190d61a8c3-config-data\") pod \"keystone-db-sync-gfht9\" (UID: \"a311a148-21f0-4b76-81f6-c9190d61a8c3\") " pod="openstack/keystone-db-sync-gfht9" Sep 30 17:19:17 crc kubenswrapper[4821]: I0930 17:19:17.979680 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-gfht9" Sep 30 17:19:18 crc kubenswrapper[4821]: I0930 17:19:18.404058 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-gfht9"] Sep 30 17:19:18 crc kubenswrapper[4821]: W0930 17:19:18.409467 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda311a148_21f0_4b76_81f6_c9190d61a8c3.slice/crio-37f41254e0c30651d39511d83f655cd634d5cecec72a5d782379593cc2ec960e WatchSource:0}: Error finding container 37f41254e0c30651d39511d83f655cd634d5cecec72a5d782379593cc2ec960e: Status 404 returned error can't find the container with id 37f41254e0c30651d39511d83f655cd634d5cecec72a5d782379593cc2ec960e Sep 30 17:19:19 crc kubenswrapper[4821]: I0930 17:19:19.098116 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gfht9" event={"ID":"a311a148-21f0-4b76-81f6-c9190d61a8c3","Type":"ContainerStarted","Data":"37f41254e0c30651d39511d83f655cd634d5cecec72a5d782379593cc2ec960e"} Sep 30 17:19:19 crc kubenswrapper[4821]: I0930 17:19:19.353181 4821 patch_prober.go:28] interesting pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:19:19 crc kubenswrapper[4821]: I0930 17:19:19.353233 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:19:19 crc kubenswrapper[4821]: I0930 17:19:19.353276 4821 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" Sep 30 17:19:19 crc kubenswrapper[4821]: I0930 17:19:19.353875 4821 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b5eaf5939fe5362fd182fcbe1679c246dcaf1dbb07c54b7f2bf7e11a0269f3a6"} pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 17:19:19 crc kubenswrapper[4821]: I0930 17:19:19.353922 4821 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" containerID="cri-o://b5eaf5939fe5362fd182fcbe1679c246dcaf1dbb07c54b7f2bf7e11a0269f3a6" gracePeriod=600 Sep 30 17:19:19 crc kubenswrapper[4821]: I0930 17:19:19.354254 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-d836-account-create-9k2wv"] Sep 30 17:19:19 crc kubenswrapper[4821]: I0930 17:19:19.355207 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d836-account-create-9k2wv" Sep 30 17:19:19 crc kubenswrapper[4821]: I0930 17:19:19.359475 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Sep 30 17:19:19 crc kubenswrapper[4821]: I0930 17:19:19.370693 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d836-account-create-9k2wv"] Sep 30 17:19:19 crc kubenswrapper[4821]: I0930 17:19:19.541559 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pns4m\" (UniqueName: \"kubernetes.io/projected/595e96f3-23bb-4671-a36d-332140fdeb05-kube-api-access-pns4m\") pod \"cinder-d836-account-create-9k2wv\" (UID: \"595e96f3-23bb-4671-a36d-332140fdeb05\") " pod="openstack/cinder-d836-account-create-9k2wv" Sep 30 17:19:19 crc kubenswrapper[4821]: I0930 17:19:19.548524 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-9d55-account-create-6fkq5"] Sep 30 17:19:19 crc kubenswrapper[4821]: I0930 17:19:19.551700 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9d55-account-create-6fkq5" Sep 30 17:19:19 crc kubenswrapper[4821]: I0930 17:19:19.553682 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Sep 30 17:19:19 crc kubenswrapper[4821]: I0930 17:19:19.558698 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9d55-account-create-6fkq5"] Sep 30 17:19:19 crc kubenswrapper[4821]: I0930 17:19:19.644009 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6d44\" (UniqueName: \"kubernetes.io/projected/e03e9a0d-52a6-4f78-9669-ea77e8d009a0-kube-api-access-x6d44\") pod \"neutron-9d55-account-create-6fkq5\" (UID: \"e03e9a0d-52a6-4f78-9669-ea77e8d009a0\") " pod="openstack/neutron-9d55-account-create-6fkq5" Sep 30 17:19:19 crc kubenswrapper[4821]: I0930 17:19:19.644063 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pns4m\" (UniqueName: \"kubernetes.io/projected/595e96f3-23bb-4671-a36d-332140fdeb05-kube-api-access-pns4m\") pod \"cinder-d836-account-create-9k2wv\" (UID: \"595e96f3-23bb-4671-a36d-332140fdeb05\") " pod="openstack/cinder-d836-account-create-9k2wv" Sep 30 17:19:19 crc kubenswrapper[4821]: I0930 17:19:19.670691 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pns4m\" (UniqueName: \"kubernetes.io/projected/595e96f3-23bb-4671-a36d-332140fdeb05-kube-api-access-pns4m\") pod \"cinder-d836-account-create-9k2wv\" (UID: \"595e96f3-23bb-4671-a36d-332140fdeb05\") " pod="openstack/cinder-d836-account-create-9k2wv" Sep 30 17:19:19 crc kubenswrapper[4821]: I0930 17:19:19.683455 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-d836-account-create-9k2wv" Sep 30 17:19:19 crc kubenswrapper[4821]: I0930 17:19:19.745415 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6d44\" (UniqueName: \"kubernetes.io/projected/e03e9a0d-52a6-4f78-9669-ea77e8d009a0-kube-api-access-x6d44\") pod \"neutron-9d55-account-create-6fkq5\" (UID: \"e03e9a0d-52a6-4f78-9669-ea77e8d009a0\") " pod="openstack/neutron-9d55-account-create-6fkq5" Sep 30 17:19:19 crc kubenswrapper[4821]: I0930 17:19:19.765809 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6d44\" (UniqueName: \"kubernetes.io/projected/e03e9a0d-52a6-4f78-9669-ea77e8d009a0-kube-api-access-x6d44\") pod \"neutron-9d55-account-create-6fkq5\" (UID: \"e03e9a0d-52a6-4f78-9669-ea77e8d009a0\") " pod="openstack/neutron-9d55-account-create-6fkq5" Sep 30 17:19:19 crc kubenswrapper[4821]: I0930 17:19:19.874515 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9d55-account-create-6fkq5" Sep 30 17:19:20 crc kubenswrapper[4821]: I0930 17:19:20.112018 4821 generic.go:334] "Generic (PLEG): container finished" podID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerID="b5eaf5939fe5362fd182fcbe1679c246dcaf1dbb07c54b7f2bf7e11a0269f3a6" exitCode=0 Sep 30 17:19:20 crc kubenswrapper[4821]: I0930 17:19:20.112057 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" event={"ID":"1c2ce348-eadc-4629-a03f-fb8924b5b434","Type":"ContainerDied","Data":"b5eaf5939fe5362fd182fcbe1679c246dcaf1dbb07c54b7f2bf7e11a0269f3a6"} Sep 30 17:19:20 crc kubenswrapper[4821]: I0930 17:19:20.112097 4821 scope.go:117] "RemoveContainer" containerID="2f00def1099cd0097896b8c09046872a1da2fa6b07915bdb81dc3ad48b1054ee" Sep 30 17:19:26 crc kubenswrapper[4821]: I0930 17:19:26.365777 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d836-account-create-9k2wv"] Sep 30 17:19:26 crc kubenswrapper[4821]: W0930 17:19:26.381166 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod595e96f3_23bb_4671_a36d_332140fdeb05.slice/crio-81da8344a6dd84ad94b2794cf5d88ccf4439ede44738f37fb3674f2ce6f9a979 WatchSource:0}: Error finding container 81da8344a6dd84ad94b2794cf5d88ccf4439ede44738f37fb3674f2ce6f9a979: Status 404 returned error can't find the container with id 81da8344a6dd84ad94b2794cf5d88ccf4439ede44738f37fb3674f2ce6f9a979 Sep 30 17:19:26 crc kubenswrapper[4821]: I0930 17:19:26.386712 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Sep 30 17:19:26 crc kubenswrapper[4821]: I0930 17:19:26.494283 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9d55-account-create-6fkq5"] Sep 30 17:19:26 crc kubenswrapper[4821]: W0930 17:19:26.498634 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode03e9a0d_52a6_4f78_9669_ea77e8d009a0.slice/crio-158e6d2e9f56070cad3ae5076b9cfe1e4fb305ae642b9a6ee802ebaa4783bf30 WatchSource:0}: Error finding container 158e6d2e9f56070cad3ae5076b9cfe1e4fb305ae642b9a6ee802ebaa4783bf30: Status 404 returned error can't find the container with id 158e6d2e9f56070cad3ae5076b9cfe1e4fb305ae642b9a6ee802ebaa4783bf30 Sep 30 17:19:26 crc kubenswrapper[4821]: I0930 17:19:26.505675 4821 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"neutron-db-secret" Sep 30 17:19:27 crc kubenswrapper[4821]: I0930 17:19:27.162482 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tmcbt" event={"ID":"76b2a1e3-7600-4e3f-a2ec-91983582bfa0","Type":"ContainerStarted","Data":"bf17ce3e4f6832c77e8836b5040626e70279ca0959c8ed345715aa513f668b2e"} Sep 30 17:19:27 crc kubenswrapper[4821]: I0930 17:19:27.166019 4821 generic.go:334] "Generic (PLEG): container finished" podID="e03e9a0d-52a6-4f78-9669-ea77e8d009a0" containerID="bdf6b5a2b3f78680a27b29b3d3d3295f665cba81740912d70f221c7049d0db32" exitCode=0 Sep 30 17:19:27 crc kubenswrapper[4821]: I0930 17:19:27.166106 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9d55-account-create-6fkq5" event={"ID":"e03e9a0d-52a6-4f78-9669-ea77e8d009a0","Type":"ContainerDied","Data":"bdf6b5a2b3f78680a27b29b3d3d3295f665cba81740912d70f221c7049d0db32"} Sep 30 17:19:27 crc kubenswrapper[4821]: I0930 17:19:27.166126 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9d55-account-create-6fkq5" event={"ID":"e03e9a0d-52a6-4f78-9669-ea77e8d009a0","Type":"ContainerStarted","Data":"158e6d2e9f56070cad3ae5076b9cfe1e4fb305ae642b9a6ee802ebaa4783bf30"} Sep 30 17:19:27 crc kubenswrapper[4821]: I0930 17:19:27.169028 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" event={"ID":"1c2ce348-eadc-4629-a03f-fb8924b5b434","Type":"ContainerStarted","Data":"1763d8a2cafcce9c75309a8111559b7e2dfe05de5a45a9dc8c3faa88662ff830"} Sep 30 17:19:27 crc kubenswrapper[4821]: I0930 17:19:27.172565 4821 generic.go:334] "Generic (PLEG): container finished" podID="595e96f3-23bb-4671-a36d-332140fdeb05" containerID="41d623733426c4d9b8b2a417804dd99ac7a2c7f7f9bf772a0b593072e55ffa2f" exitCode=0 Sep 30 17:19:27 crc kubenswrapper[4821]: I0930 17:19:27.172635 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d836-account-create-9k2wv" event={"ID":"595e96f3-23bb-4671-a36d-332140fdeb05","Type":"ContainerDied","Data":"41d623733426c4d9b8b2a417804dd99ac7a2c7f7f9bf772a0b593072e55ffa2f"} Sep 30 17:19:27 crc kubenswrapper[4821]: I0930 17:19:27.172660 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d836-account-create-9k2wv" event={"ID":"595e96f3-23bb-4671-a36d-332140fdeb05","Type":"ContainerStarted","Data":"81da8344a6dd84ad94b2794cf5d88ccf4439ede44738f37fb3674f2ce6f9a979"} Sep 30 17:19:27 crc kubenswrapper[4821]: I0930 17:19:27.183135 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-tmcbt" podStartSLOduration=2.800817153 podStartE2EDuration="15.183117674s" podCreationTimestamp="2025-09-30 17:19:12 +0000 UTC" firstStartedPulling="2025-09-30 17:19:13.696668806 +0000 UTC m=+949.601714750" lastFinishedPulling="2025-09-30 17:19:26.078969327 +0000 UTC m=+961.984015271" observedRunningTime="2025-09-30 17:19:27.179303919 +0000 UTC m=+963.084349873" watchObservedRunningTime="2025-09-30 17:19:27.183117674 +0000 UTC m=+963.088163618" Sep 30 17:19:30 crc kubenswrapper[4821]: I0930 17:19:30.851701 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9d55-account-create-6fkq5" Sep 30 17:19:30 crc kubenswrapper[4821]: I0930 17:19:30.894646 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-d836-account-create-9k2wv" Sep 30 17:19:31 crc kubenswrapper[4821]: I0930 17:19:31.037772 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pns4m\" (UniqueName: \"kubernetes.io/projected/595e96f3-23bb-4671-a36d-332140fdeb05-kube-api-access-pns4m\") pod \"595e96f3-23bb-4671-a36d-332140fdeb05\" (UID: \"595e96f3-23bb-4671-a36d-332140fdeb05\") " Sep 30 17:19:31 crc kubenswrapper[4821]: I0930 17:19:31.037834 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6d44\" (UniqueName: \"kubernetes.io/projected/e03e9a0d-52a6-4f78-9669-ea77e8d009a0-kube-api-access-x6d44\") pod \"e03e9a0d-52a6-4f78-9669-ea77e8d009a0\" (UID: \"e03e9a0d-52a6-4f78-9669-ea77e8d009a0\") " Sep 30 17:19:31 crc kubenswrapper[4821]: I0930 17:19:31.042432 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e03e9a0d-52a6-4f78-9669-ea77e8d009a0-kube-api-access-x6d44" (OuterVolumeSpecName: "kube-api-access-x6d44") pod "e03e9a0d-52a6-4f78-9669-ea77e8d009a0" (UID: "e03e9a0d-52a6-4f78-9669-ea77e8d009a0"). InnerVolumeSpecName "kube-api-access-x6d44". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:19:31 crc kubenswrapper[4821]: I0930 17:19:31.042861 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/595e96f3-23bb-4671-a36d-332140fdeb05-kube-api-access-pns4m" (OuterVolumeSpecName: "kube-api-access-pns4m") pod "595e96f3-23bb-4671-a36d-332140fdeb05" (UID: "595e96f3-23bb-4671-a36d-332140fdeb05"). InnerVolumeSpecName "kube-api-access-pns4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:19:31 crc kubenswrapper[4821]: I0930 17:19:31.139990 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pns4m\" (UniqueName: \"kubernetes.io/projected/595e96f3-23bb-4671-a36d-332140fdeb05-kube-api-access-pns4m\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:31 crc kubenswrapper[4821]: I0930 17:19:31.140142 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6d44\" (UniqueName: \"kubernetes.io/projected/e03e9a0d-52a6-4f78-9669-ea77e8d009a0-kube-api-access-x6d44\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:31 crc kubenswrapper[4821]: I0930 17:19:31.225650 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d836-account-create-9k2wv" event={"ID":"595e96f3-23bb-4671-a36d-332140fdeb05","Type":"ContainerDied","Data":"81da8344a6dd84ad94b2794cf5d88ccf4439ede44738f37fb3674f2ce6f9a979"} Sep 30 17:19:31 crc kubenswrapper[4821]: I0930 17:19:31.225689 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81da8344a6dd84ad94b2794cf5d88ccf4439ede44738f37fb3674f2ce6f9a979" Sep 30 17:19:31 crc kubenswrapper[4821]: I0930 17:19:31.225704 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-d836-account-create-9k2wv" Sep 30 17:19:31 crc kubenswrapper[4821]: I0930 17:19:31.228812 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gfht9" event={"ID":"a311a148-21f0-4b76-81f6-c9190d61a8c3","Type":"ContainerStarted","Data":"916c649311d7d146db7ff3ac5f14fd5dbb537ccfa5c1de5bbbaae81a888cb028"} Sep 30 17:19:31 crc kubenswrapper[4821]: I0930 17:19:31.231720 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9d55-account-create-6fkq5" event={"ID":"e03e9a0d-52a6-4f78-9669-ea77e8d009a0","Type":"ContainerDied","Data":"158e6d2e9f56070cad3ae5076b9cfe1e4fb305ae642b9a6ee802ebaa4783bf30"} Sep 30 17:19:31 crc kubenswrapper[4821]: I0930 17:19:31.231760 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="158e6d2e9f56070cad3ae5076b9cfe1e4fb305ae642b9a6ee802ebaa4783bf30" Sep 30 17:19:31 crc kubenswrapper[4821]: I0930 17:19:31.231787 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9d55-account-create-6fkq5" Sep 30 17:19:31 crc kubenswrapper[4821]: I0930 17:19:31.257012 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-gfht9" podStartSLOduration=1.904214279 podStartE2EDuration="14.256988206s" podCreationTimestamp="2025-09-30 17:19:17 +0000 UTC" firstStartedPulling="2025-09-30 17:19:18.412655365 +0000 UTC m=+954.317701309" lastFinishedPulling="2025-09-30 17:19:30.765429252 +0000 UTC m=+966.670475236" observedRunningTime="2025-09-30 17:19:31.246437495 +0000 UTC m=+967.151483439" watchObservedRunningTime="2025-09-30 17:19:31.256988206 +0000 UTC m=+967.162034150" Sep 30 17:19:33 crc kubenswrapper[4821]: I0930 17:19:33.247729 4821 generic.go:334] "Generic (PLEG): container finished" podID="76b2a1e3-7600-4e3f-a2ec-91983582bfa0" containerID="bf17ce3e4f6832c77e8836b5040626e70279ca0959c8ed345715aa513f668b2e" exitCode=0 Sep 30 17:19:33 crc kubenswrapper[4821]: I0930 17:19:33.247917 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tmcbt" event={"ID":"76b2a1e3-7600-4e3f-a2ec-91983582bfa0","Type":"ContainerDied","Data":"bf17ce3e4f6832c77e8836b5040626e70279ca0959c8ed345715aa513f668b2e"} Sep 30 17:19:34 crc kubenswrapper[4821]: I0930 17:19:34.258883 4821 generic.go:334] "Generic (PLEG): container finished" podID="a311a148-21f0-4b76-81f6-c9190d61a8c3" containerID="916c649311d7d146db7ff3ac5f14fd5dbb537ccfa5c1de5bbbaae81a888cb028" exitCode=0 Sep 30 17:19:34 crc kubenswrapper[4821]: I0930 17:19:34.258975 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gfht9" event={"ID":"a311a148-21f0-4b76-81f6-c9190d61a8c3","Type":"ContainerDied","Data":"916c649311d7d146db7ff3ac5f14fd5dbb537ccfa5c1de5bbbaae81a888cb028"} Sep 30 17:19:34 crc kubenswrapper[4821]: I0930 17:19:34.633313 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-tmcbt" Sep 30 17:19:34 crc kubenswrapper[4821]: I0930 17:19:34.791552 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvw47\" (UniqueName: \"kubernetes.io/projected/76b2a1e3-7600-4e3f-a2ec-91983582bfa0-kube-api-access-pvw47\") pod \"76b2a1e3-7600-4e3f-a2ec-91983582bfa0\" (UID: \"76b2a1e3-7600-4e3f-a2ec-91983582bfa0\") " Sep 30 17:19:34 crc kubenswrapper[4821]: I0930 17:19:34.791694 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76b2a1e3-7600-4e3f-a2ec-91983582bfa0-combined-ca-bundle\") pod \"76b2a1e3-7600-4e3f-a2ec-91983582bfa0\" (UID: \"76b2a1e3-7600-4e3f-a2ec-91983582bfa0\") " Sep 30 17:19:34 crc kubenswrapper[4821]: I0930 17:19:34.791725 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b2a1e3-7600-4e3f-a2ec-91983582bfa0-config-data\") pod \"76b2a1e3-7600-4e3f-a2ec-91983582bfa0\" (UID: \"76b2a1e3-7600-4e3f-a2ec-91983582bfa0\") " Sep 30 17:19:34 crc kubenswrapper[4821]: I0930 17:19:34.791774 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/76b2a1e3-7600-4e3f-a2ec-91983582bfa0-db-sync-config-data\") pod \"76b2a1e3-7600-4e3f-a2ec-91983582bfa0\" (UID: \"76b2a1e3-7600-4e3f-a2ec-91983582bfa0\") " Sep 30 17:19:34 crc kubenswrapper[4821]: I0930 17:19:34.797130 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76b2a1e3-7600-4e3f-a2ec-91983582bfa0-kube-api-access-pvw47" (OuterVolumeSpecName: "kube-api-access-pvw47") pod "76b2a1e3-7600-4e3f-a2ec-91983582bfa0" (UID: "76b2a1e3-7600-4e3f-a2ec-91983582bfa0"). InnerVolumeSpecName "kube-api-access-pvw47". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:19:34 crc kubenswrapper[4821]: I0930 17:19:34.798582 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76b2a1e3-7600-4e3f-a2ec-91983582bfa0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "76b2a1e3-7600-4e3f-a2ec-91983582bfa0" (UID: "76b2a1e3-7600-4e3f-a2ec-91983582bfa0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:19:34 crc kubenswrapper[4821]: I0930 17:19:34.817425 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76b2a1e3-7600-4e3f-a2ec-91983582bfa0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76b2a1e3-7600-4e3f-a2ec-91983582bfa0" (UID: "76b2a1e3-7600-4e3f-a2ec-91983582bfa0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:19:34 crc kubenswrapper[4821]: I0930 17:19:34.839160 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76b2a1e3-7600-4e3f-a2ec-91983582bfa0-config-data" (OuterVolumeSpecName: "config-data") pod "76b2a1e3-7600-4e3f-a2ec-91983582bfa0" (UID: "76b2a1e3-7600-4e3f-a2ec-91983582bfa0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:19:34 crc kubenswrapper[4821]: I0930 17:19:34.893180 4821 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/76b2a1e3-7600-4e3f-a2ec-91983582bfa0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:34 crc kubenswrapper[4821]: I0930 17:19:34.893214 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvw47\" (UniqueName: \"kubernetes.io/projected/76b2a1e3-7600-4e3f-a2ec-91983582bfa0-kube-api-access-pvw47\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:34 crc kubenswrapper[4821]: I0930 17:19:34.893224 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76b2a1e3-7600-4e3f-a2ec-91983582bfa0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:34 crc kubenswrapper[4821]: I0930 17:19:34.893234 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b2a1e3-7600-4e3f-a2ec-91983582bfa0-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:35 crc kubenswrapper[4821]: I0930 17:19:35.268267 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tmcbt" event={"ID":"76b2a1e3-7600-4e3f-a2ec-91983582bfa0","Type":"ContainerDied","Data":"89291e9f9347ef419f6641b1249d380dafd6d2554388e79fa27ddba72de60b76"} Sep 30 17:19:35 crc kubenswrapper[4821]: I0930 17:19:35.268307 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-tmcbt" Sep 30 17:19:35 crc kubenswrapper[4821]: I0930 17:19:35.268312 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89291e9f9347ef419f6641b1249d380dafd6d2554388e79fa27ddba72de60b76" Sep 30 17:19:35 crc kubenswrapper[4821]: I0930 17:19:35.666175 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-pppdz"] Sep 30 17:19:35 crc kubenswrapper[4821]: E0930 17:19:35.666778 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76b2a1e3-7600-4e3f-a2ec-91983582bfa0" containerName="glance-db-sync" Sep 30 17:19:35 crc kubenswrapper[4821]: I0930 17:19:35.666795 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="76b2a1e3-7600-4e3f-a2ec-91983582bfa0" containerName="glance-db-sync" Sep 30 17:19:35 crc kubenswrapper[4821]: E0930 17:19:35.666807 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="595e96f3-23bb-4671-a36d-332140fdeb05" containerName="mariadb-account-create" Sep 30 17:19:35 crc kubenswrapper[4821]: I0930 17:19:35.666812 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="595e96f3-23bb-4671-a36d-332140fdeb05" containerName="mariadb-account-create" Sep 30 17:19:35 crc kubenswrapper[4821]: E0930 17:19:35.666822 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e03e9a0d-52a6-4f78-9669-ea77e8d009a0" containerName="mariadb-account-create" Sep 30 17:19:35 crc kubenswrapper[4821]: I0930 17:19:35.666828 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="e03e9a0d-52a6-4f78-9669-ea77e8d009a0" containerName="mariadb-account-create" Sep 30 17:19:35 crc kubenswrapper[4821]: I0930 17:19:35.666998 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="e03e9a0d-52a6-4f78-9669-ea77e8d009a0" containerName="mariadb-account-create" Sep 30 17:19:35 crc kubenswrapper[4821]: I0930 17:19:35.667008 4821 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="595e96f3-23bb-4671-a36d-332140fdeb05" containerName="mariadb-account-create" Sep 30 17:19:35 crc kubenswrapper[4821]: I0930 17:19:35.667025 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="76b2a1e3-7600-4e3f-a2ec-91983582bfa0" containerName="glance-db-sync" Sep 30 17:19:35 crc kubenswrapper[4821]: I0930 17:19:35.667786 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-pppdz" Sep 30 17:19:35 crc kubenswrapper[4821]: I0930 17:19:35.687592 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-gfht9" Sep 30 17:19:35 crc kubenswrapper[4821]: I0930 17:19:35.747378 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-pppdz"] Sep 30 17:19:35 crc kubenswrapper[4821]: I0930 17:19:35.810452 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a311a148-21f0-4b76-81f6-c9190d61a8c3-config-data\") pod \"a311a148-21f0-4b76-81f6-c9190d61a8c3\" (UID: \"a311a148-21f0-4b76-81f6-c9190d61a8c3\") " Sep 30 17:19:35 crc kubenswrapper[4821]: I0930 17:19:35.810584 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a311a148-21f0-4b76-81f6-c9190d61a8c3-combined-ca-bundle\") pod \"a311a148-21f0-4b76-81f6-c9190d61a8c3\" (UID: \"a311a148-21f0-4b76-81f6-c9190d61a8c3\") " Sep 30 17:19:35 crc kubenswrapper[4821]: I0930 17:19:35.810668 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsrs5\" (UniqueName: \"kubernetes.io/projected/a311a148-21f0-4b76-81f6-c9190d61a8c3-kube-api-access-nsrs5\") pod \"a311a148-21f0-4b76-81f6-c9190d61a8c3\" (UID: \"a311a148-21f0-4b76-81f6-c9190d61a8c3\") " Sep 30 17:19:35 crc kubenswrapper[4821]: I0930 17:19:35.810965 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd4ll\" (UniqueName: \"kubernetes.io/projected/3e72d85b-1d7f-46f0-8c84-7f171d3256d9-kube-api-access-wd4ll\") pod \"dnsmasq-dns-54f9b7b8d9-pppdz\" (UID: \"3e72d85b-1d7f-46f0-8c84-7f171d3256d9\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-pppdz" Sep 30 17:19:35 crc kubenswrapper[4821]: I0930 17:19:35.811023 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e72d85b-1d7f-46f0-8c84-7f171d3256d9-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-pppdz\" (UID: \"3e72d85b-1d7f-46f0-8c84-7f171d3256d9\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-pppdz" Sep 30 17:19:35 crc kubenswrapper[4821]: I0930 17:19:35.811095 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e72d85b-1d7f-46f0-8c84-7f171d3256d9-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-pppdz\" (UID: \"3e72d85b-1d7f-46f0-8c84-7f171d3256d9\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-pppdz" Sep 30 17:19:35 crc kubenswrapper[4821]: I0930 17:19:35.811260 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e72d85b-1d7f-46f0-8c84-7f171d3256d9-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-pppdz\" (UID: \"3e72d85b-1d7f-46f0-8c84-7f171d3256d9\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-pppdz" Sep 30 17:19:35 crc 
kubenswrapper[4821]: I0930 17:19:35.811362 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e72d85b-1d7f-46f0-8c84-7f171d3256d9-config\") pod \"dnsmasq-dns-54f9b7b8d9-pppdz\" (UID: \"3e72d85b-1d7f-46f0-8c84-7f171d3256d9\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-pppdz" Sep 30 17:19:35 crc kubenswrapper[4821]: I0930 17:19:35.834279 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a311a148-21f0-4b76-81f6-c9190d61a8c3-kube-api-access-nsrs5" (OuterVolumeSpecName: "kube-api-access-nsrs5") pod "a311a148-21f0-4b76-81f6-c9190d61a8c3" (UID: "a311a148-21f0-4b76-81f6-c9190d61a8c3"). InnerVolumeSpecName "kube-api-access-nsrs5". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:19:35 crc kubenswrapper[4821]: I0930 17:19:35.873166 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a311a148-21f0-4b76-81f6-c9190d61a8c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a311a148-21f0-4b76-81f6-c9190d61a8c3" (UID: "a311a148-21f0-4b76-81f6-c9190d61a8c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:19:35 crc kubenswrapper[4821]: I0930 17:19:35.913202 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd4ll\" (UniqueName: \"kubernetes.io/projected/3e72d85b-1d7f-46f0-8c84-7f171d3256d9-kube-api-access-wd4ll\") pod \"dnsmasq-dns-54f9b7b8d9-pppdz\" (UID: \"3e72d85b-1d7f-46f0-8c84-7f171d3256d9\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-pppdz" Sep 30 17:19:35 crc kubenswrapper[4821]: I0930 17:19:35.913336 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e72d85b-1d7f-46f0-8c84-7f171d3256d9-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-pppdz\" (UID: \"3e72d85b-1d7f-46f0-8c84-7f171d3256d9\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-pppdz" Sep 30 17:19:35 crc kubenswrapper[4821]: I0930 17:19:35.913394 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e72d85b-1d7f-46f0-8c84-7f171d3256d9-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-pppdz\" (UID: \"3e72d85b-1d7f-46f0-8c84-7f171d3256d9\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-pppdz" Sep 30 17:19:35 crc kubenswrapper[4821]: I0930 17:19:35.913465 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e72d85b-1d7f-46f0-8c84-7f171d3256d9-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-pppdz\" (UID: \"3e72d85b-1d7f-46f0-8c84-7f171d3256d9\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-pppdz" Sep 30 17:19:35 crc kubenswrapper[4821]: I0930 17:19:35.913515 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e72d85b-1d7f-46f0-8c84-7f171d3256d9-config\") pod \"dnsmasq-dns-54f9b7b8d9-pppdz\" (UID: \"3e72d85b-1d7f-46f0-8c84-7f171d3256d9\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-pppdz" Sep 30 17:19:35 crc kubenswrapper[4821]: I0930 17:19:35.913640 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a311a148-21f0-4b76-81f6-c9190d61a8c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:35 crc kubenswrapper[4821]: I0930 17:19:35.913683 4821 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsrs5\" (UniqueName: \"kubernetes.io/projected/a311a148-21f0-4b76-81f6-c9190d61a8c3-kube-api-access-nsrs5\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:35 crc kubenswrapper[4821]: I0930 17:19:35.915895 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e72d85b-1d7f-46f0-8c84-7f171d3256d9-config\") pod \"dnsmasq-dns-54f9b7b8d9-pppdz\" (UID: \"3e72d85b-1d7f-46f0-8c84-7f171d3256d9\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-pppdz" Sep 30 17:19:35 crc kubenswrapper[4821]: I0930 17:19:35.915988 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e72d85b-1d7f-46f0-8c84-7f171d3256d9-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-pppdz\" (UID: \"3e72d85b-1d7f-46f0-8c84-7f171d3256d9\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-pppdz" Sep 30 17:19:35 crc kubenswrapper[4821]: I0930 17:19:35.916868 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e72d85b-1d7f-46f0-8c84-7f171d3256d9-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-pppdz\" (UID: \"3e72d85b-1d7f-46f0-8c84-7f171d3256d9\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-pppdz" Sep 30 17:19:35 crc kubenswrapper[4821]: I0930 17:19:35.916869 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e72d85b-1d7f-46f0-8c84-7f171d3256d9-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-pppdz\" (UID: \"3e72d85b-1d7f-46f0-8c84-7f171d3256d9\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-pppdz" Sep 30 17:19:35 crc kubenswrapper[4821]: I0930 17:19:35.926624 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a311a148-21f0-4b76-81f6-c9190d61a8c3-config-data" (OuterVolumeSpecName: "config-data") pod "a311a148-21f0-4b76-81f6-c9190d61a8c3" (UID: "a311a148-21f0-4b76-81f6-c9190d61a8c3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:19:35 crc kubenswrapper[4821]: I0930 17:19:35.953471 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd4ll\" (UniqueName: \"kubernetes.io/projected/3e72d85b-1d7f-46f0-8c84-7f171d3256d9-kube-api-access-wd4ll\") pod \"dnsmasq-dns-54f9b7b8d9-pppdz\" (UID: \"3e72d85b-1d7f-46f0-8c84-7f171d3256d9\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-pppdz" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.000281 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-pppdz" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.021862 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a311a148-21f0-4b76-81f6-c9190d61a8c3-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.282426 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gfht9" event={"ID":"a311a148-21f0-4b76-81f6-c9190d61a8c3","Type":"ContainerDied","Data":"37f41254e0c30651d39511d83f655cd634d5cecec72a5d782379593cc2ec960e"} Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.282721 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37f41254e0c30651d39511d83f655cd634d5cecec72a5d782379593cc2ec960e" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.282522 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-gfht9" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.291133 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-pppdz"] Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.473530 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-pppdz"] Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.514068 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-gdfv8"] Sep 30 17:19:36 crc kubenswrapper[4821]: E0930 17:19:36.514408 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a311a148-21f0-4b76-81f6-c9190d61a8c3" containerName="keystone-db-sync" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.514424 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="a311a148-21f0-4b76-81f6-c9190d61a8c3" containerName="keystone-db-sync" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.514677 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="a311a148-21f0-4b76-81f6-c9190d61a8c3" containerName="keystone-db-sync" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.515514 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-gdfv8" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.528374 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-gdfv8"] Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.529682 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/213fee6b-dc65-4947-9549-9ba0035b21f7-dns-svc\") pod \"dnsmasq-dns-6546db6db7-gdfv8\" (UID: \"213fee6b-dc65-4947-9549-9ba0035b21f7\") " pod="openstack/dnsmasq-dns-6546db6db7-gdfv8" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.529742 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7lq7\" (UniqueName: \"kubernetes.io/projected/213fee6b-dc65-4947-9549-9ba0035b21f7-kube-api-access-d7lq7\") pod \"dnsmasq-dns-6546db6db7-gdfv8\" (UID: \"213fee6b-dc65-4947-9549-9ba0035b21f7\") " pod="openstack/dnsmasq-dns-6546db6db7-gdfv8" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.529760 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/213fee6b-dc65-4947-9549-9ba0035b21f7-config\") pod \"dnsmasq-dns-6546db6db7-gdfv8\" (UID: \"213fee6b-dc65-4947-9549-9ba0035b21f7\") " pod="openstack/dnsmasq-dns-6546db6db7-gdfv8" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.529868 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/213fee6b-dc65-4947-9549-9ba0035b21f7-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-gdfv8\" (UID: \"213fee6b-dc65-4947-9549-9ba0035b21f7\") " pod="openstack/dnsmasq-dns-6546db6db7-gdfv8" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.529923 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/213fee6b-dc65-4947-9549-9ba0035b21f7-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-gdfv8\" (UID: \"213fee6b-dc65-4947-9549-9ba0035b21f7\") " pod="openstack/dnsmasq-dns-6546db6db7-gdfv8" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.541457 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-q9mm5"] Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.542437 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-q9mm5" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.545196 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.545994 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dkq42" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.549050 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.549556 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.556485 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-q9mm5"] Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.631131 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/213fee6b-dc65-4947-9549-9ba0035b21f7-dns-svc\") pod \"dnsmasq-dns-6546db6db7-gdfv8\" (UID: \"213fee6b-dc65-4947-9549-9ba0035b21f7\") " pod="openstack/dnsmasq-dns-6546db6db7-gdfv8" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.631477 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7lq7\" (UniqueName: \"kubernetes.io/projected/213fee6b-dc65-4947-9549-9ba0035b21f7-kube-api-access-d7lq7\") pod \"dnsmasq-dns-6546db6db7-gdfv8\" (UID: \"213fee6b-dc65-4947-9549-9ba0035b21f7\") " pod="openstack/dnsmasq-dns-6546db6db7-gdfv8" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.631499 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/213fee6b-dc65-4947-9549-9ba0035b21f7-config\") pod \"dnsmasq-dns-6546db6db7-gdfv8\" (UID: \"213fee6b-dc65-4947-9549-9ba0035b21f7\") " pod="openstack/dnsmasq-dns-6546db6db7-gdfv8" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.631519 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/213fee6b-dc65-4947-9549-9ba0035b21f7-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-gdfv8\" (UID: \"213fee6b-dc65-4947-9549-9ba0035b21f7\") " pod="openstack/dnsmasq-dns-6546db6db7-gdfv8" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.632228 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/213fee6b-dc65-4947-9549-9ba0035b21f7-dns-svc\") pod \"dnsmasq-dns-6546db6db7-gdfv8\" (UID: \"213fee6b-dc65-4947-9549-9ba0035b21f7\") " pod="openstack/dnsmasq-dns-6546db6db7-gdfv8" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.632343 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/213fee6b-dc65-4947-9549-9ba0035b21f7-config\") pod \"dnsmasq-dns-6546db6db7-gdfv8\" (UID: \"213fee6b-dc65-4947-9549-9ba0035b21f7\") " pod="openstack/dnsmasq-dns-6546db6db7-gdfv8" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.632401 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/213fee6b-dc65-4947-9549-9ba0035b21f7-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-gdfv8\" (UID: \"213fee6b-dc65-4947-9549-9ba0035b21f7\") " pod="openstack/dnsmasq-dns-6546db6db7-gdfv8" Sep 30 
17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.632578 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/213fee6b-dc65-4947-9549-9ba0035b21f7-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-gdfv8\" (UID: \"213fee6b-dc65-4947-9549-9ba0035b21f7\") " pod="openstack/dnsmasq-dns-6546db6db7-gdfv8" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.632964 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/213fee6b-dc65-4947-9549-9ba0035b21f7-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-gdfv8\" (UID: \"213fee6b-dc65-4947-9549-9ba0035b21f7\") " pod="openstack/dnsmasq-dns-6546db6db7-gdfv8" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.663380 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7lq7\" (UniqueName: \"kubernetes.io/projected/213fee6b-dc65-4947-9549-9ba0035b21f7-kube-api-access-d7lq7\") pod \"dnsmasq-dns-6546db6db7-gdfv8\" (UID: \"213fee6b-dc65-4947-9549-9ba0035b21f7\") " pod="openstack/dnsmasq-dns-6546db6db7-gdfv8" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.737016 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d86109cd-7a87-43db-ad27-5669b4b2a03b-scripts\") pod \"keystone-bootstrap-q9mm5\" (UID: \"d86109cd-7a87-43db-ad27-5669b4b2a03b\") " pod="openstack/keystone-bootstrap-q9mm5" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.737110 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d86109cd-7a87-43db-ad27-5669b4b2a03b-credential-keys\") pod \"keystone-bootstrap-q9mm5\" (UID: \"d86109cd-7a87-43db-ad27-5669b4b2a03b\") " pod="openstack/keystone-bootstrap-q9mm5" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.737145 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d86109cd-7a87-43db-ad27-5669b4b2a03b-config-data\") pod \"keystone-bootstrap-q9mm5\" (UID: \"d86109cd-7a87-43db-ad27-5669b4b2a03b\") " pod="openstack/keystone-bootstrap-q9mm5" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.737163 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d86109cd-7a87-43db-ad27-5669b4b2a03b-fernet-keys\") pod \"keystone-bootstrap-q9mm5\" (UID: \"d86109cd-7a87-43db-ad27-5669b4b2a03b\") " pod="openstack/keystone-bootstrap-q9mm5" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.737187 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sgxz\" (UniqueName: \"kubernetes.io/projected/d86109cd-7a87-43db-ad27-5669b4b2a03b-kube-api-access-7sgxz\") pod \"keystone-bootstrap-q9mm5\" (UID: \"d86109cd-7a87-43db-ad27-5669b4b2a03b\") " pod="openstack/keystone-bootstrap-q9mm5" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.737240 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d86109cd-7a87-43db-ad27-5669b4b2a03b-combined-ca-bundle\") pod \"keystone-bootstrap-q9mm5\" (UID: \"d86109cd-7a87-43db-ad27-5669b4b2a03b\") " pod="openstack/keystone-bootstrap-q9mm5" Sep 30 17:19:36 crc 
kubenswrapper[4821]: I0930 17:19:36.753616 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7698bcb95c-njjrf"] Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.754874 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7698bcb95c-njjrf" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.757740 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.757960 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.761909 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-4zwqx" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.762219 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.797074 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7698bcb95c-njjrf"] Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.801541 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-kr849"] Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.802461 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-kr849" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.807964 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.808300 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-44ldx" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.808564 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.830596 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-kr849"] Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.830908 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-gdfv8" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.838027 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-combined-ca-bundle\") pod \"cinder-db-sync-kr849\" (UID: \"9aa40c0f-e07d-43de-92d6-60ba8d6b668d\") " pod="openstack/cinder-db-sync-kr849" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.838072 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d86109cd-7a87-43db-ad27-5669b4b2a03b-scripts\") pod \"keystone-bootstrap-q9mm5\" (UID: \"d86109cd-7a87-43db-ad27-5669b4b2a03b\") " pod="openstack/keystone-bootstrap-q9mm5" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.838108 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-config-data\") pod \"cinder-db-sync-kr849\" (UID: \"9aa40c0f-e07d-43de-92d6-60ba8d6b668d\") " pod="openstack/cinder-db-sync-kr849" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.838140 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1e20153-619c-4c3a-93ef-39c4b87d535e-horizon-secret-key\") pod \"horizon-7698bcb95c-njjrf\" (UID: \"c1e20153-619c-4c3a-93ef-39c4b87d535e\") " pod="openstack/horizon-7698bcb95c-njjrf" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.838157 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-db-sync-config-data\") pod \"cinder-db-sync-kr849\" (UID: \"9aa40c0f-e07d-43de-92d6-60ba8d6b668d\") " pod="openstack/cinder-db-sync-kr849" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.838185 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d86109cd-7a87-43db-ad27-5669b4b2a03b-credential-keys\") pod \"keystone-bootstrap-q9mm5\" (UID: \"d86109cd-7a87-43db-ad27-5669b4b2a03b\") " pod="openstack/keystone-bootstrap-q9mm5" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.838210 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhnz9\" (UniqueName: \"kubernetes.io/projected/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-kube-api-access-qhnz9\") pod \"cinder-db-sync-kr849\" (UID: \"9aa40c0f-e07d-43de-92d6-60ba8d6b668d\") " pod="openstack/cinder-db-sync-kr849" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.838229 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-etc-machine-id\") pod \"cinder-db-sync-kr849\" (UID: \"9aa40c0f-e07d-43de-92d6-60ba8d6b668d\") " pod="openstack/cinder-db-sync-kr849" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.838247 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d86109cd-7a87-43db-ad27-5669b4b2a03b-config-data\") pod \"keystone-bootstrap-q9mm5\" (UID: \"d86109cd-7a87-43db-ad27-5669b4b2a03b\") " 
pod="openstack/keystone-bootstrap-q9mm5" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.838261 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d86109cd-7a87-43db-ad27-5669b4b2a03b-fernet-keys\") pod \"keystone-bootstrap-q9mm5\" (UID: \"d86109cd-7a87-43db-ad27-5669b4b2a03b\") " pod="openstack/keystone-bootstrap-q9mm5" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.838281 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sgxz\" (UniqueName: \"kubernetes.io/projected/d86109cd-7a87-43db-ad27-5669b4b2a03b-kube-api-access-7sgxz\") pod \"keystone-bootstrap-q9mm5\" (UID: \"d86109cd-7a87-43db-ad27-5669b4b2a03b\") " pod="openstack/keystone-bootstrap-q9mm5" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.838298 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1e20153-619c-4c3a-93ef-39c4b87d535e-config-data\") pod \"horizon-7698bcb95c-njjrf\" (UID: \"c1e20153-619c-4c3a-93ef-39c4b87d535e\") " pod="openstack/horizon-7698bcb95c-njjrf" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.838329 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1e20153-619c-4c3a-93ef-39c4b87d535e-scripts\") pod \"horizon-7698bcb95c-njjrf\" (UID: \"c1e20153-619c-4c3a-93ef-39c4b87d535e\") " pod="openstack/horizon-7698bcb95c-njjrf" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.838353 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clnqt\" (UniqueName: \"kubernetes.io/projected/c1e20153-619c-4c3a-93ef-39c4b87d535e-kube-api-access-clnqt\") pod \"horizon-7698bcb95c-njjrf\" (UID: \"c1e20153-619c-4c3a-93ef-39c4b87d535e\") " pod="openstack/horizon-7698bcb95c-njjrf" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.838369 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-scripts\") pod \"cinder-db-sync-kr849\" (UID: \"9aa40c0f-e07d-43de-92d6-60ba8d6b668d\") " pod="openstack/cinder-db-sync-kr849" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.838390 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d86109cd-7a87-43db-ad27-5669b4b2a03b-combined-ca-bundle\") pod \"keystone-bootstrap-q9mm5\" (UID: \"d86109cd-7a87-43db-ad27-5669b4b2a03b\") " pod="openstack/keystone-bootstrap-q9mm5" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.838424 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1e20153-619c-4c3a-93ef-39c4b87d535e-logs\") pod \"horizon-7698bcb95c-njjrf\" (UID: \"c1e20153-619c-4c3a-93ef-39c4b87d535e\") " pod="openstack/horizon-7698bcb95c-njjrf" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.844735 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d86109cd-7a87-43db-ad27-5669b4b2a03b-config-data\") pod \"keystone-bootstrap-q9mm5\" (UID: \"d86109cd-7a87-43db-ad27-5669b4b2a03b\") " pod="openstack/keystone-bootstrap-q9mm5" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 
17:19:36.849218 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d86109cd-7a87-43db-ad27-5669b4b2a03b-scripts\") pod \"keystone-bootstrap-q9mm5\" (UID: \"d86109cd-7a87-43db-ad27-5669b4b2a03b\") " pod="openstack/keystone-bootstrap-q9mm5" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.858812 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d86109cd-7a87-43db-ad27-5669b4b2a03b-fernet-keys\") pod \"keystone-bootstrap-q9mm5\" (UID: \"d86109cd-7a87-43db-ad27-5669b4b2a03b\") " pod="openstack/keystone-bootstrap-q9mm5" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.865269 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d86109cd-7a87-43db-ad27-5669b4b2a03b-combined-ca-bundle\") pod \"keystone-bootstrap-q9mm5\" (UID: \"d86109cd-7a87-43db-ad27-5669b4b2a03b\") " pod="openstack/keystone-bootstrap-q9mm5" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.881093 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d86109cd-7a87-43db-ad27-5669b4b2a03b-credential-keys\") pod \"keystone-bootstrap-q9mm5\" (UID: \"d86109cd-7a87-43db-ad27-5669b4b2a03b\") " pod="openstack/keystone-bootstrap-q9mm5" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.922868 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sgxz\" (UniqueName: \"kubernetes.io/projected/d86109cd-7a87-43db-ad27-5669b4b2a03b-kube-api-access-7sgxz\") pod \"keystone-bootstrap-q9mm5\" (UID: \"d86109cd-7a87-43db-ad27-5669b4b2a03b\") " pod="openstack/keystone-bootstrap-q9mm5" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.931958 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-jjbr2"] Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.935782 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-jjbr2" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.947980 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5a2b06c3-690a-469e-bdf6-5033d9be88e8-config\") pod \"neutron-db-sync-jjbr2\" (UID: \"5a2b06c3-690a-469e-bdf6-5033d9be88e8\") " pod="openstack/neutron-db-sync-jjbr2" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.948047 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1e20153-619c-4c3a-93ef-39c4b87d535e-logs\") pod \"horizon-7698bcb95c-njjrf\" (UID: \"c1e20153-619c-4c3a-93ef-39c4b87d535e\") " pod="openstack/horizon-7698bcb95c-njjrf" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.948076 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-combined-ca-bundle\") pod \"cinder-db-sync-kr849\" (UID: \"9aa40c0f-e07d-43de-92d6-60ba8d6b668d\") " pod="openstack/cinder-db-sync-kr849" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.948155 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a2b06c3-690a-469e-bdf6-5033d9be88e8-combined-ca-bundle\") pod \"neutron-db-sync-jjbr2\" (UID: \"5a2b06c3-690a-469e-bdf6-5033d9be88e8\") " pod="openstack/neutron-db-sync-jjbr2" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.948173 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-config-data\") pod \"cinder-db-sync-kr849\" (UID: \"9aa40c0f-e07d-43de-92d6-60ba8d6b668d\") " pod="openstack/cinder-db-sync-kr849" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.948204 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1e20153-619c-4c3a-93ef-39c4b87d535e-horizon-secret-key\") pod \"horizon-7698bcb95c-njjrf\" (UID: \"c1e20153-619c-4c3a-93ef-39c4b87d535e\") " pod="openstack/horizon-7698bcb95c-njjrf" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.948218 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-db-sync-config-data\") pod \"cinder-db-sync-kr849\" (UID: \"9aa40c0f-e07d-43de-92d6-60ba8d6b668d\") " pod="openstack/cinder-db-sync-kr849" Sep 30 17:19:36 crc kubenswrapper[4821]: I0930 17:19:36.948255 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhnz9\" (UniqueName: \"kubernetes.io/projected/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-kube-api-access-qhnz9\") pod \"cinder-db-sync-kr849\" (UID: \"9aa40c0f-e07d-43de-92d6-60ba8d6b668d\") " pod="openstack/cinder-db-sync-kr849" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:36.948277 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-etc-machine-id\") pod \"cinder-db-sync-kr849\" (UID: \"9aa40c0f-e07d-43de-92d6-60ba8d6b668d\") " pod="openstack/cinder-db-sync-kr849" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:36.948299 4821 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1e20153-619c-4c3a-93ef-39c4b87d535e-config-data\") pod \"horizon-7698bcb95c-njjrf\" (UID: \"c1e20153-619c-4c3a-93ef-39c4b87d535e\") " pod="openstack/horizon-7698bcb95c-njjrf" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:36.948318 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8j4x\" (UniqueName: \"kubernetes.io/projected/5a2b06c3-690a-469e-bdf6-5033d9be88e8-kube-api-access-v8j4x\") pod \"neutron-db-sync-jjbr2\" (UID: \"5a2b06c3-690a-469e-bdf6-5033d9be88e8\") " pod="openstack/neutron-db-sync-jjbr2" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:36.948362 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1e20153-619c-4c3a-93ef-39c4b87d535e-scripts\") pod \"horizon-7698bcb95c-njjrf\" (UID: \"c1e20153-619c-4c3a-93ef-39c4b87d535e\") " pod="openstack/horizon-7698bcb95c-njjrf" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:36.948378 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clnqt\" (UniqueName: \"kubernetes.io/projected/c1e20153-619c-4c3a-93ef-39c4b87d535e-kube-api-access-clnqt\") pod \"horizon-7698bcb95c-njjrf\" (UID: \"c1e20153-619c-4c3a-93ef-39c4b87d535e\") " pod="openstack/horizon-7698bcb95c-njjrf" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:36.948393 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-scripts\") pod \"cinder-db-sync-kr849\" (UID: \"9aa40c0f-e07d-43de-92d6-60ba8d6b668d\") " pod="openstack/cinder-db-sync-kr849" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:36.948496 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-n62sk" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:36.948545 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:36.950492 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1e20153-619c-4c3a-93ef-39c4b87d535e-config-data\") pod \"horizon-7698bcb95c-njjrf\" (UID: \"c1e20153-619c-4c3a-93ef-39c4b87d535e\") " pod="openstack/horizon-7698bcb95c-njjrf" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:36.950683 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-etc-machine-id\") pod \"cinder-db-sync-kr849\" (UID: \"9aa40c0f-e07d-43de-92d6-60ba8d6b668d\") " pod="openstack/cinder-db-sync-kr849" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:36.950755 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1e20153-619c-4c3a-93ef-39c4b87d535e-scripts\") pod \"horizon-7698bcb95c-njjrf\" (UID: \"c1e20153-619c-4c3a-93ef-39c4b87d535e\") " pod="openstack/horizon-7698bcb95c-njjrf" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:36.954473 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-jjbr2"] Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:36.954949 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1e20153-619c-4c3a-93ef-39c4b87d535e-logs\") pod \"horizon-7698bcb95c-njjrf\" (UID: \"c1e20153-619c-4c3a-93ef-39c4b87d535e\") " pod="openstack/horizon-7698bcb95c-njjrf" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:36.966127 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:36.968150 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-config-data\") pod \"cinder-db-sync-kr849\" (UID: \"9aa40c0f-e07d-43de-92d6-60ba8d6b668d\") " pod="openstack/cinder-db-sync-kr849" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:36.976224 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-db-sync-config-data\") pod \"cinder-db-sync-kr849\" (UID: \"9aa40c0f-e07d-43de-92d6-60ba8d6b668d\") " pod="openstack/cinder-db-sync-kr849" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:36.977113 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1e20153-619c-4c3a-93ef-39c4b87d535e-horizon-secret-key\") pod \"horizon-7698bcb95c-njjrf\" (UID: \"c1e20153-619c-4c3a-93ef-39c4b87d535e\") " pod="openstack/horizon-7698bcb95c-njjrf" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:36.977552 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-combined-ca-bundle\") pod \"cinder-db-sync-kr849\" (UID: \"9aa40c0f-e07d-43de-92d6-60ba8d6b668d\") " pod="openstack/cinder-db-sync-kr849" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:36.985587 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-scripts\") pod \"cinder-db-sync-kr849\" (UID: \"9aa40c0f-e07d-43de-92d6-60ba8d6b668d\") " pod="openstack/cinder-db-sync-kr849" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:36.993013 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clnqt\" (UniqueName: \"kubernetes.io/projected/c1e20153-619c-4c3a-93ef-39c4b87d535e-kube-api-access-clnqt\") pod \"horizon-7698bcb95c-njjrf\" (UID: \"c1e20153-619c-4c3a-93ef-39c4b87d535e\") " pod="openstack/horizon-7698bcb95c-njjrf" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:36.995582 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhnz9\" (UniqueName: \"kubernetes.io/projected/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-kube-api-access-qhnz9\") pod \"cinder-db-sync-kr849\" (UID: \"9aa40c0f-e07d-43de-92d6-60ba8d6b668d\") " pod="openstack/cinder-db-sync-kr849" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.007576 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-gdfv8"] Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.045274 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7ddb7997dc-cnx5j"] Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.050326 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7ddb7997dc-cnx5j" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.051720 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8j4x\" (UniqueName: \"kubernetes.io/projected/5a2b06c3-690a-469e-bdf6-5033d9be88e8-kube-api-access-v8j4x\") pod \"neutron-db-sync-jjbr2\" (UID: \"5a2b06c3-690a-469e-bdf6-5033d9be88e8\") " pod="openstack/neutron-db-sync-jjbr2" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.051759 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5a2b06c3-690a-469e-bdf6-5033d9be88e8-config\") pod \"neutron-db-sync-jjbr2\" (UID: \"5a2b06c3-690a-469e-bdf6-5033d9be88e8\") " pod="openstack/neutron-db-sync-jjbr2" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.051811 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a2b06c3-690a-469e-bdf6-5033d9be88e8-combined-ca-bundle\") pod \"neutron-db-sync-jjbr2\" (UID: \"5a2b06c3-690a-469e-bdf6-5033d9be88e8\") " pod="openstack/neutron-db-sync-jjbr2" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.060149 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7ddb7997dc-cnx5j"] Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.068578 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5a2b06c3-690a-469e-bdf6-5033d9be88e8-config\") pod \"neutron-db-sync-jjbr2\" (UID: \"5a2b06c3-690a-469e-bdf6-5033d9be88e8\") " pod="openstack/neutron-db-sync-jjbr2" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.077460 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7698bcb95c-njjrf" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.079902 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a2b06c3-690a-469e-bdf6-5033d9be88e8-combined-ca-bundle\") pod \"neutron-db-sync-jjbr2\" (UID: \"5a2b06c3-690a-469e-bdf6-5033d9be88e8\") " pod="openstack/neutron-db-sync-jjbr2" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.085130 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-znw7f"] Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.086489 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-znw7f" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.088684 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-znw7f"] Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.109212 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8j4x\" (UniqueName: \"kubernetes.io/projected/5a2b06c3-690a-469e-bdf6-5033d9be88e8-kube-api-access-v8j4x\") pod \"neutron-db-sync-jjbr2\" (UID: \"5a2b06c3-690a-469e-bdf6-5033d9be88e8\") " pod="openstack/neutron-db-sync-jjbr2" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.138988 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-kr849" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.153055 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/145b7040-eb73-4b29-9e7a-a96d867530c5-scripts\") pod \"horizon-7ddb7997dc-cnx5j\" (UID: \"145b7040-eb73-4b29-9e7a-a96d867530c5\") " pod="openstack/horizon-7ddb7997dc-cnx5j" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.153344 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr847\" (UniqueName: \"kubernetes.io/projected/145b7040-eb73-4b29-9e7a-a96d867530c5-kube-api-access-jr847\") pod \"horizon-7ddb7997dc-cnx5j\" (UID: \"145b7040-eb73-4b29-9e7a-a96d867530c5\") " pod="openstack/horizon-7ddb7997dc-cnx5j" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.153374 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/145b7040-eb73-4b29-9e7a-a96d867530c5-config-data\") pod \"horizon-7ddb7997dc-cnx5j\" (UID: \"145b7040-eb73-4b29-9e7a-a96d867530c5\") " pod="openstack/horizon-7ddb7997dc-cnx5j" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.153393 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/145b7040-eb73-4b29-9e7a-a96d867530c5-horizon-secret-key\") pod \"horizon-7ddb7997dc-cnx5j\" (UID: \"145b7040-eb73-4b29-9e7a-a96d867530c5\") " pod="openstack/horizon-7ddb7997dc-cnx5j" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.153459 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/145b7040-eb73-4b29-9e7a-a96d867530c5-logs\") pod \"horizon-7ddb7997dc-cnx5j\" (UID: \"145b7040-eb73-4b29-9e7a-a96d867530c5\") " pod="openstack/horizon-7ddb7997dc-cnx5j" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.164137 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-q9mm5" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.208667 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.215480 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.216725 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.221889 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-8kj9b" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.227136 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.228876 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.255050 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-mk2vd"] Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.256028 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-mk2vd" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.256199 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-znw7f\" (UID: \"ca7b68c4-bbb5-4b88-aab9-b246a1c8a158\") " pod="openstack/dnsmasq-dns-7987f74bbc-znw7f" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.256237 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr847\" (UniqueName: \"kubernetes.io/projected/145b7040-eb73-4b29-9e7a-a96d867530c5-kube-api-access-jr847\") pod \"horizon-7ddb7997dc-cnx5j\" (UID: \"145b7040-eb73-4b29-9e7a-a96d867530c5\") " pod="openstack/horizon-7ddb7997dc-cnx5j" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.256261 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/145b7040-eb73-4b29-9e7a-a96d867530c5-config-data\") pod \"horizon-7ddb7997dc-cnx5j\" (UID: \"145b7040-eb73-4b29-9e7a-a96d867530c5\") " pod="openstack/horizon-7ddb7997dc-cnx5j" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.256276 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/145b7040-eb73-4b29-9e7a-a96d867530c5-horizon-secret-key\") pod \"horizon-7ddb7997dc-cnx5j\" (UID: \"145b7040-eb73-4b29-9e7a-a96d867530c5\") " pod="openstack/horizon-7ddb7997dc-cnx5j" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.256291 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158-config\") pod \"dnsmasq-dns-7987f74bbc-znw7f\" (UID: \"ca7b68c4-bbb5-4b88-aab9-b246a1c8a158\") " pod="openstack/dnsmasq-dns-7987f74bbc-znw7f" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.256332 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm8lq\" (UniqueName: \"kubernetes.io/projected/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158-kube-api-access-tm8lq\") pod \"dnsmasq-dns-7987f74bbc-znw7f\" (UID: \"ca7b68c4-bbb5-4b88-aab9-b246a1c8a158\") " pod="openstack/dnsmasq-dns-7987f74bbc-znw7f" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.256368 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/145b7040-eb73-4b29-9e7a-a96d867530c5-logs\") pod \"horizon-7ddb7997dc-cnx5j\" (UID: \"145b7040-eb73-4b29-9e7a-a96d867530c5\") " pod="openstack/horizon-7ddb7997dc-cnx5j" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.256404 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-znw7f\" (UID: \"ca7b68c4-bbb5-4b88-aab9-b246a1c8a158\") " pod="openstack/dnsmasq-dns-7987f74bbc-znw7f" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.256430 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-znw7f\" (UID: 
\"ca7b68c4-bbb5-4b88-aab9-b246a1c8a158\") " pod="openstack/dnsmasq-dns-7987f74bbc-znw7f" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.256447 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/145b7040-eb73-4b29-9e7a-a96d867530c5-scripts\") pod \"horizon-7ddb7997dc-cnx5j\" (UID: \"145b7040-eb73-4b29-9e7a-a96d867530c5\") " pod="openstack/horizon-7ddb7997dc-cnx5j" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.281786 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/145b7040-eb73-4b29-9e7a-a96d867530c5-logs\") pod \"horizon-7ddb7997dc-cnx5j\" (UID: \"145b7040-eb73-4b29-9e7a-a96d867530c5\") " pod="openstack/horizon-7ddb7997dc-cnx5j" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.285929 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/145b7040-eb73-4b29-9e7a-a96d867530c5-scripts\") pod \"horizon-7ddb7997dc-cnx5j\" (UID: \"145b7040-eb73-4b29-9e7a-a96d867530c5\") " pod="openstack/horizon-7ddb7997dc-cnx5j" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.285951 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.286867 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-wfntr" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.298798 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/145b7040-eb73-4b29-9e7a-a96d867530c5-config-data\") pod \"horizon-7ddb7997dc-cnx5j\" (UID: \"145b7040-eb73-4b29-9e7a-a96d867530c5\") " pod="openstack/horizon-7ddb7997dc-cnx5j" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.312156 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-jjbr2" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.312895 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr847\" (UniqueName: \"kubernetes.io/projected/145b7040-eb73-4b29-9e7a-a96d867530c5-kube-api-access-jr847\") pod \"horizon-7ddb7997dc-cnx5j\" (UID: \"145b7040-eb73-4b29-9e7a-a96d867530c5\") " pod="openstack/horizon-7ddb7997dc-cnx5j" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.313248 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.374117 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/145b7040-eb73-4b29-9e7a-a96d867530c5-horizon-secret-key\") pod \"horizon-7ddb7997dc-cnx5j\" (UID: \"145b7040-eb73-4b29-9e7a-a96d867530c5\") " pod="openstack/horizon-7ddb7997dc-cnx5j" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.379039 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-logs\") pod \"glance-default-external-api-0\" (UID: \"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\") " pod="openstack/glance-default-external-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.379117 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\") " pod="openstack/glance-default-external-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.379156 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-znw7f\" (UID: \"ca7b68c4-bbb5-4b88-aab9-b246a1c8a158\") " pod="openstack/dnsmasq-dns-7987f74bbc-znw7f" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.379195 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xclkq\" (UniqueName: \"kubernetes.io/projected/7b1226c0-ea59-4c57-9837-cafbb926f373-kube-api-access-xclkq\") pod \"placement-db-sync-mk2vd\" (UID: \"7b1226c0-ea59-4c57-9837-cafbb926f373\") " pod="openstack/placement-db-sync-mk2vd" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.379224 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-znw7f\" (UID: \"ca7b68c4-bbb5-4b88-aab9-b246a1c8a158\") " pod="openstack/dnsmasq-dns-7987f74bbc-znw7f" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.379252 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg45l\" (UniqueName: \"kubernetes.io/projected/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-kube-api-access-dg45l\") pod \"glance-default-external-api-0\" (UID: \"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\") " pod="openstack/glance-default-external-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.379287 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b1226c0-ea59-4c57-9837-cafbb926f373-combined-ca-bundle\") pod \"placement-db-sync-mk2vd\" (UID: \"7b1226c0-ea59-4c57-9837-cafbb926f373\") " pod="openstack/placement-db-sync-mk2vd" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.379314 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b1226c0-ea59-4c57-9837-cafbb926f373-logs\") pod \"placement-db-sync-mk2vd\" (UID: \"7b1226c0-ea59-4c57-9837-cafbb926f373\") " pod="openstack/placement-db-sync-mk2vd" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.379341 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-znw7f\" (UID: \"ca7b68c4-bbb5-4b88-aab9-b246a1c8a158\") " pod="openstack/dnsmasq-dns-7987f74bbc-znw7f" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.379378 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158-config\") pod \"dnsmasq-dns-7987f74bbc-znw7f\" (UID: \"ca7b68c4-bbb5-4b88-aab9-b246a1c8a158\") " pod="openstack/dnsmasq-dns-7987f74bbc-znw7f" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.379426 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-config-data\") pod \"glance-default-external-api-0\" (UID: \"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\") " pod="openstack/glance-default-external-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.379448 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b1226c0-ea59-4c57-9837-cafbb926f373-scripts\") pod \"placement-db-sync-mk2vd\" (UID: \"7b1226c0-ea59-4c57-9837-cafbb926f373\") " pod="openstack/placement-db-sync-mk2vd" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.379475 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b1226c0-ea59-4c57-9837-cafbb926f373-config-data\") pod \"placement-db-sync-mk2vd\" (UID: \"7b1226c0-ea59-4c57-9837-cafbb926f373\") " pod="openstack/placement-db-sync-mk2vd" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.379500 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm8lq\" (UniqueName: \"kubernetes.io/projected/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158-kube-api-access-tm8lq\") pod \"dnsmasq-dns-7987f74bbc-znw7f\" (UID: \"ca7b68c4-bbb5-4b88-aab9-b246a1c8a158\") " pod="openstack/dnsmasq-dns-7987f74bbc-znw7f" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.379550 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\") " pod="openstack/glance-default-external-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.379578 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\") " pod="openstack/glance-default-external-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.379602 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-scripts\") pod \"glance-default-external-api-0\" (UID: \"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\") " pod="openstack/glance-default-external-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.381071 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-znw7f\" (UID: \"ca7b68c4-bbb5-4b88-aab9-b246a1c8a158\") " pod="openstack/dnsmasq-dns-7987f74bbc-znw7f" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.381748 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-znw7f\" (UID: \"ca7b68c4-bbb5-4b88-aab9-b246a1c8a158\") " pod="openstack/dnsmasq-dns-7987f74bbc-znw7f" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.386637 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-mk2vd"] Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.387736 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-znw7f\" (UID: \"ca7b68c4-bbb5-4b88-aab9-b246a1c8a158\") " pod="openstack/dnsmasq-dns-7987f74bbc-znw7f" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.399870 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158-config\") pod \"dnsmasq-dns-7987f74bbc-znw7f\" (UID: \"ca7b68c4-bbb5-4b88-aab9-b246a1c8a158\") " pod="openstack/dnsmasq-dns-7987f74bbc-znw7f" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.414901 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm8lq\" (UniqueName: \"kubernetes.io/projected/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158-kube-api-access-tm8lq\") pod \"dnsmasq-dns-7987f74bbc-znw7f\" (UID: \"ca7b68c4-bbb5-4b88-aab9-b246a1c8a158\") " pod="openstack/dnsmasq-dns-7987f74bbc-znw7f" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.421797 4821 generic.go:334] "Generic (PLEG): container finished" podID="3e72d85b-1d7f-46f0-8c84-7f171d3256d9" containerID="1f3ac85882b3879a3689d8f472fad7420e16db54a49c540d8ca1857e179e1490" exitCode=0 Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.421830 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-pppdz" event={"ID":"3e72d85b-1d7f-46f0-8c84-7f171d3256d9","Type":"ContainerDied","Data":"1f3ac85882b3879a3689d8f472fad7420e16db54a49c540d8ca1857e179e1490"} Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.421858 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-pppdz" event={"ID":"3e72d85b-1d7f-46f0-8c84-7f171d3256d9","Type":"ContainerStarted","Data":"60d636487de108b92d7b4545d78f8b9671aa94c1386196786386d794fe272944"} Sep 30 
17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.451960 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7ddb7997dc-cnx5j" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.453768 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-znw7f" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.483116 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-config-data\") pod \"glance-default-external-api-0\" (UID: \"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\") " pod="openstack/glance-default-external-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.483169 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b1226c0-ea59-4c57-9837-cafbb926f373-scripts\") pod \"placement-db-sync-mk2vd\" (UID: \"7b1226c0-ea59-4c57-9837-cafbb926f373\") " pod="openstack/placement-db-sync-mk2vd" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.483196 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b1226c0-ea59-4c57-9837-cafbb926f373-config-data\") pod \"placement-db-sync-mk2vd\" (UID: \"7b1226c0-ea59-4c57-9837-cafbb926f373\") " pod="openstack/placement-db-sync-mk2vd" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.483230 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\") " pod="openstack/glance-default-external-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.483246 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\") " pod="openstack/glance-default-external-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.483263 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-scripts\") pod \"glance-default-external-api-0\" (UID: \"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\") " pod="openstack/glance-default-external-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.483295 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-logs\") pod \"glance-default-external-api-0\" (UID: \"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\") " pod="openstack/glance-default-external-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.483320 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\") " pod="openstack/glance-default-external-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.483347 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xclkq\" (UniqueName: 
\"kubernetes.io/projected/7b1226c0-ea59-4c57-9837-cafbb926f373-kube-api-access-xclkq\") pod \"placement-db-sync-mk2vd\" (UID: \"7b1226c0-ea59-4c57-9837-cafbb926f373\") " pod="openstack/placement-db-sync-mk2vd" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.483368 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg45l\" (UniqueName: \"kubernetes.io/projected/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-kube-api-access-dg45l\") pod \"glance-default-external-api-0\" (UID: \"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\") " pod="openstack/glance-default-external-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.483392 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b1226c0-ea59-4c57-9837-cafbb926f373-combined-ca-bundle\") pod \"placement-db-sync-mk2vd\" (UID: \"7b1226c0-ea59-4c57-9837-cafbb926f373\") " pod="openstack/placement-db-sync-mk2vd" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.483410 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b1226c0-ea59-4c57-9837-cafbb926f373-logs\") pod \"placement-db-sync-mk2vd\" (UID: \"7b1226c0-ea59-4c57-9837-cafbb926f373\") " pod="openstack/placement-db-sync-mk2vd" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.483796 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b1226c0-ea59-4c57-9837-cafbb926f373-logs\") pod \"placement-db-sync-mk2vd\" (UID: \"7b1226c0-ea59-4c57-9837-cafbb926f373\") " pod="openstack/placement-db-sync-mk2vd" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.484117 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-logs\") pod \"glance-default-external-api-0\" (UID: \"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\") " pod="openstack/glance-default-external-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.484390 4821 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.500238 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\") " pod="openstack/glance-default-external-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.505841 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b1226c0-ea59-4c57-9837-cafbb926f373-scripts\") pod \"placement-db-sync-mk2vd\" (UID: \"7b1226c0-ea59-4c57-9837-cafbb926f373\") " pod="openstack/placement-db-sync-mk2vd" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.506123 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xclkq\" (UniqueName: \"kubernetes.io/projected/7b1226c0-ea59-4c57-9837-cafbb926f373-kube-api-access-xclkq\") pod \"placement-db-sync-mk2vd\" (UID: \"7b1226c0-ea59-4c57-9837-cafbb926f373\") " 
pod="openstack/placement-db-sync-mk2vd" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.511597 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b1226c0-ea59-4c57-9837-cafbb926f373-combined-ca-bundle\") pod \"placement-db-sync-mk2vd\" (UID: \"7b1226c0-ea59-4c57-9837-cafbb926f373\") " pod="openstack/placement-db-sync-mk2vd" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.512397 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-config-data\") pod \"glance-default-external-api-0\" (UID: \"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\") " pod="openstack/glance-default-external-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.512916 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg45l\" (UniqueName: \"kubernetes.io/projected/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-kube-api-access-dg45l\") pod \"glance-default-external-api-0\" (UID: \"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\") " pod="openstack/glance-default-external-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.513018 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-scripts\") pod \"glance-default-external-api-0\" (UID: \"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\") " pod="openstack/glance-default-external-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.513814 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\") " pod="openstack/glance-default-external-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.514861 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b1226c0-ea59-4c57-9837-cafbb926f373-config-data\") pod \"placement-db-sync-mk2vd\" (UID: \"7b1226c0-ea59-4c57-9837-cafbb926f373\") " pod="openstack/placement-db-sync-mk2vd" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.558186 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\") " pod="openstack/glance-default-external-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.700901 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-mk2vd" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.853301 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.854711 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.855053 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.858991 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.868553 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.897203 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"76b51e71-c45d-493d-b96c-c742179cf21e\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.897250 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf9ln\" (UniqueName: \"kubernetes.io/projected/76b51e71-c45d-493d-b96c-c742179cf21e-kube-api-access-hf9ln\") pod \"glance-default-internal-api-0\" (UID: \"76b51e71-c45d-493d-b96c-c742179cf21e\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.897275 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/76b51e71-c45d-493d-b96c-c742179cf21e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"76b51e71-c45d-493d-b96c-c742179cf21e\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.897293 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76b51e71-c45d-493d-b96c-c742179cf21e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"76b51e71-c45d-493d-b96c-c742179cf21e\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.897328 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76b51e71-c45d-493d-b96c-c742179cf21e-logs\") pod \"glance-default-internal-api-0\" (UID: \"76b51e71-c45d-493d-b96c-c742179cf21e\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.897368 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b51e71-c45d-493d-b96c-c742179cf21e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"76b51e71-c45d-493d-b96c-c742179cf21e\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.897391 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76b51e71-c45d-493d-b96c-c742179cf21e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"76b51e71-c45d-493d-b96c-c742179cf21e\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.998833 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/76b51e71-c45d-493d-b96c-c742179cf21e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"76b51e71-c45d-493d-b96c-c742179cf21e\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.998873 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76b51e71-c45d-493d-b96c-c742179cf21e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"76b51e71-c45d-493d-b96c-c742179cf21e\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.998919 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76b51e71-c45d-493d-b96c-c742179cf21e-logs\") pod \"glance-default-internal-api-0\" (UID: \"76b51e71-c45d-493d-b96c-c742179cf21e\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.998963 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b51e71-c45d-493d-b96c-c742179cf21e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"76b51e71-c45d-493d-b96c-c742179cf21e\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.998996 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76b51e71-c45d-493d-b96c-c742179cf21e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"76b51e71-c45d-493d-b96c-c742179cf21e\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.999039 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"76b51e71-c45d-493d-b96c-c742179cf21e\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.999075 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf9ln\" (UniqueName: \"kubernetes.io/projected/76b51e71-c45d-493d-b96c-c742179cf21e-kube-api-access-hf9ln\") pod \"glance-default-internal-api-0\" (UID: \"76b51e71-c45d-493d-b96c-c742179cf21e\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.999628 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76b51e71-c45d-493d-b96c-c742179cf21e-logs\") pod \"glance-default-internal-api-0\" (UID: \"76b51e71-c45d-493d-b96c-c742179cf21e\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:37.999849 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/76b51e71-c45d-493d-b96c-c742179cf21e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"76b51e71-c45d-493d-b96c-c742179cf21e\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:38.000412 4821 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"76b51e71-c45d-493d-b96c-c742179cf21e\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Sep 30 17:19:38 crc 
kubenswrapper[4821]: I0930 17:19:38.006252 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76b51e71-c45d-493d-b96c-c742179cf21e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"76b51e71-c45d-493d-b96c-c742179cf21e\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:38.007147 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76b51e71-c45d-493d-b96c-c742179cf21e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"76b51e71-c45d-493d-b96c-c742179cf21e\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:38.008429 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b51e71-c45d-493d-b96c-c742179cf21e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"76b51e71-c45d-493d-b96c-c742179cf21e\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:38.024756 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf9ln\" (UniqueName: \"kubernetes.io/projected/76b51e71-c45d-493d-b96c-c742179cf21e-kube-api-access-hf9ln\") pod \"glance-default-internal-api-0\" (UID: \"76b51e71-c45d-493d-b96c-c742179cf21e\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:38.032125 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"76b51e71-c45d-493d-b96c-c742179cf21e\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:38.218488 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:38.313109 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-pppdz" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:38.375137 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-gdfv8"] Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:38.406318 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e72d85b-1d7f-46f0-8c84-7f171d3256d9-ovsdbserver-nb\") pod \"3e72d85b-1d7f-46f0-8c84-7f171d3256d9\" (UID: \"3e72d85b-1d7f-46f0-8c84-7f171d3256d9\") " Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:38.406479 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e72d85b-1d7f-46f0-8c84-7f171d3256d9-dns-svc\") pod \"3e72d85b-1d7f-46f0-8c84-7f171d3256d9\" (UID: \"3e72d85b-1d7f-46f0-8c84-7f171d3256d9\") " Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:38.406500 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e72d85b-1d7f-46f0-8c84-7f171d3256d9-config\") pod \"3e72d85b-1d7f-46f0-8c84-7f171d3256d9\" (UID: \"3e72d85b-1d7f-46f0-8c84-7f171d3256d9\") " Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:38.406891 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e72d85b-1d7f-46f0-8c84-7f171d3256d9-ovsdbserver-sb\") pod \"3e72d85b-1d7f-46f0-8c84-7f171d3256d9\" (UID: \"3e72d85b-1d7f-46f0-8c84-7f171d3256d9\") " Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:38.407025 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wd4ll\" (UniqueName: \"kubernetes.io/projected/3e72d85b-1d7f-46f0-8c84-7f171d3256d9-kube-api-access-wd4ll\") pod \"3e72d85b-1d7f-46f0-8c84-7f171d3256d9\" (UID: \"3e72d85b-1d7f-46f0-8c84-7f171d3256d9\") " Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:38.426320 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e72d85b-1d7f-46f0-8c84-7f171d3256d9-kube-api-access-wd4ll" (OuterVolumeSpecName: "kube-api-access-wd4ll") pod "3e72d85b-1d7f-46f0-8c84-7f171d3256d9" (UID: "3e72d85b-1d7f-46f0-8c84-7f171d3256d9"). InnerVolumeSpecName "kube-api-access-wd4ll". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:38.428120 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e72d85b-1d7f-46f0-8c84-7f171d3256d9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3e72d85b-1d7f-46f0-8c84-7f171d3256d9" (UID: "3e72d85b-1d7f-46f0-8c84-7f171d3256d9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:38.446540 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e72d85b-1d7f-46f0-8c84-7f171d3256d9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3e72d85b-1d7f-46f0-8c84-7f171d3256d9" (UID: "3e72d85b-1d7f-46f0-8c84-7f171d3256d9"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:38.466768 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e72d85b-1d7f-46f0-8c84-7f171d3256d9-config" (OuterVolumeSpecName: "config") pod "3e72d85b-1d7f-46f0-8c84-7f171d3256d9" (UID: "3e72d85b-1d7f-46f0-8c84-7f171d3256d9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:38.468158 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-pppdz" event={"ID":"3e72d85b-1d7f-46f0-8c84-7f171d3256d9","Type":"ContainerDied","Data":"60d636487de108b92d7b4545d78f8b9671aa94c1386196786386d794fe272944"} Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:38.468207 4821 scope.go:117] "RemoveContainer" containerID="1f3ac85882b3879a3689d8f472fad7420e16db54a49c540d8ca1857e179e1490" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:38.468323 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-pppdz" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:38.468697 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e72d85b-1d7f-46f0-8c84-7f171d3256d9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3e72d85b-1d7f-46f0-8c84-7f171d3256d9" (UID: "3e72d85b-1d7f-46f0-8c84-7f171d3256d9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:38.471985 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-gdfv8" event={"ID":"213fee6b-dc65-4947-9549-9ba0035b21f7","Type":"ContainerStarted","Data":"3a6d72a4257b676cfe94d7b035165122620c6c24b1c9eb72450bec5ef0de3b7c"} Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:38.508639 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wd4ll\" (UniqueName: \"kubernetes.io/projected/3e72d85b-1d7f-46f0-8c84-7f171d3256d9-kube-api-access-wd4ll\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:38.508665 4821 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e72d85b-1d7f-46f0-8c84-7f171d3256d9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:38.508675 4821 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e72d85b-1d7f-46f0-8c84-7f171d3256d9-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:38.508683 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e72d85b-1d7f-46f0-8c84-7f171d3256d9-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:38.508690 4821 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e72d85b-1d7f-46f0-8c84-7f171d3256d9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:38.677470 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-q9mm5"] Sep 30 17:19:38 crc kubenswrapper[4821]: W0930 17:19:38.695045 4821 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd86109cd_7a87_43db_ad27_5669b4b2a03b.slice/crio-b128a91bd79f2577850e75486e1a15874be034ee5f9b8da1bae3c05985d5f68f WatchSource:0}: Error finding container b128a91bd79f2577850e75486e1a15874be034ee5f9b8da1bae3c05985d5f68f: Status 404 returned error can't find the container with id b128a91bd79f2577850e75486e1a15874be034ee5f9b8da1bae3c05985d5f68f Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:38.730863 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-jjbr2"] Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:38.732017 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-znw7f"] Sep 30 17:19:38 crc kubenswrapper[4821]: W0930 17:19:38.732286 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a2b06c3_690a_469e_bdf6_5033d9be88e8.slice/crio-4ab08e2f61cc2d5276e13b02edb98f8939c0a7dde106f66dfd7b06da5420ceb1 WatchSource:0}: Error finding container 4ab08e2f61cc2d5276e13b02edb98f8939c0a7dde106f66dfd7b06da5420ceb1: Status 404 returned error can't find the container with id 4ab08e2f61cc2d5276e13b02edb98f8939c0a7dde106f66dfd7b06da5420ceb1 Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:38.776650 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7698bcb95c-njjrf"] Sep 30 17:19:38 crc kubenswrapper[4821]: W0930 17:19:38.818033 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod145b7040_eb73_4b29_9e7a_a96d867530c5.slice/crio-0a06a18fe51474c2d0b7c5c28b83a94244a67f36a58279c92932e067949d5415 WatchSource:0}: Error finding container 0a06a18fe51474c2d0b7c5c28b83a94244a67f36a58279c92932e067949d5415: Status 404 returned error can't find the container with id 0a06a18fe51474c2d0b7c5c28b83a94244a67f36a58279c92932e067949d5415 Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:38.822873 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7ddb7997dc-cnx5j"] Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:38.880345 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-mk2vd"] Sep 30 17:19:38 crc kubenswrapper[4821]: W0930 17:19:38.897158 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b1226c0_ea59_4c57_9837_cafbb926f373.slice/crio-08f562a91593c33ab35c9fe58dd7d4b8f4b93535ea2e73703953d92ee3b758dd WatchSource:0}: Error finding container 08f562a91593c33ab35c9fe58dd7d4b8f4b93535ea2e73703953d92ee3b758dd: Status 404 returned error can't find the container with id 08f562a91593c33ab35c9fe58dd7d4b8f4b93535ea2e73703953d92ee3b758dd Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:38.949650 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-kr849"] Sep 30 17:19:38 crc kubenswrapper[4821]: I0930 17:19:38.988827 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.032116 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-pppdz"] Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.043600 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-pppdz"] Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.377754 4821 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.444310 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7698bcb95c-njjrf"] Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.483856 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-9887b9bbf-xcxhr"] Sep 30 17:19:39 crc kubenswrapper[4821]: E0930 17:19:39.484210 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e72d85b-1d7f-46f0-8c84-7f171d3256d9" containerName="init" Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.484232 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e72d85b-1d7f-46f0-8c84-7f171d3256d9" containerName="init" Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.484384 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e72d85b-1d7f-46f0-8c84-7f171d3256d9" containerName="init" Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.485216 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9887b9bbf-xcxhr" Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.493495 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b3f14e78-c9d7-4274-9e39-5bcedfbfd687","Type":"ContainerStarted","Data":"12e2a8033a20ec42ef5268143cd3d07fdfadb5cfae930f51de7ccd651c3181c1"} Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.505779 4821 generic.go:334] "Generic (PLEG): container finished" podID="ca7b68c4-bbb5-4b88-aab9-b246a1c8a158" containerID="fb5a234ef2f4030874ea2ea28a97aafa2b4e15109afa0310d11783f057a65a5b" exitCode=0 Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.505837 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-znw7f" event={"ID":"ca7b68c4-bbb5-4b88-aab9-b246a1c8a158","Type":"ContainerDied","Data":"fb5a234ef2f4030874ea2ea28a97aafa2b4e15109afa0310d11783f057a65a5b"} Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.505862 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-znw7f" event={"ID":"ca7b68c4-bbb5-4b88-aab9-b246a1c8a158","Type":"ContainerStarted","Data":"1b30f4e3f9dcc9e59b8680e9541c894b202d9c943d99410f2404b1784708ea4b"} Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.520509 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-q9mm5" event={"ID":"d86109cd-7a87-43db-ad27-5669b4b2a03b","Type":"ContainerStarted","Data":"26b41bd42612ef95a77c5610ac7e0b4e37525a5f648c17072c4581fa06ea382b"} Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.520552 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-q9mm5" event={"ID":"d86109cd-7a87-43db-ad27-5669b4b2a03b","Type":"ContainerStarted","Data":"b128a91bd79f2577850e75486e1a15874be034ee5f9b8da1bae3c05985d5f68f"} Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.535383 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kr849" event={"ID":"9aa40c0f-e07d-43de-92d6-60ba8d6b668d","Type":"ContainerStarted","Data":"1e740e737eaa7f15b1e4d974ced66f37ae5c389c6e7007af16626705c6d01699"} Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.562855 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-9887b9bbf-xcxhr"] Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.562888 4821 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/neutron-db-sync-jjbr2" event={"ID":"5a2b06c3-690a-469e-bdf6-5033d9be88e8","Type":"ContainerStarted","Data":"34617b1a24cead19921d6337ba964022401b56a3e645b303d906029c2264eb59"} Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.562904 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jjbr2" event={"ID":"5a2b06c3-690a-469e-bdf6-5033d9be88e8","Type":"ContainerStarted","Data":"4ab08e2f61cc2d5276e13b02edb98f8939c0a7dde106f66dfd7b06da5420ceb1"} Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.575269 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7698bcb95c-njjrf" event={"ID":"c1e20153-619c-4c3a-93ef-39c4b87d535e","Type":"ContainerStarted","Data":"beb1c7a432107f7a89041076a65ac4dd2de6b48302e9303f1ff81687db95c531"} Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.581810 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7ddb7997dc-cnx5j" event={"ID":"145b7040-eb73-4b29-9e7a-a96d867530c5","Type":"ContainerStarted","Data":"0a06a18fe51474c2d0b7c5c28b83a94244a67f36a58279c92932e067949d5415"} Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.611320 4821 generic.go:334] "Generic (PLEG): container finished" podID="213fee6b-dc65-4947-9549-9ba0035b21f7" containerID="77b461235edfe73f6abb0fb7c967add5eb40b1140642f96d831f43c1a3803664" exitCode=0 Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.611406 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-gdfv8" event={"ID":"213fee6b-dc65-4947-9549-9ba0035b21f7","Type":"ContainerDied","Data":"77b461235edfe73f6abb0fb7c967add5eb40b1140642f96d831f43c1a3803664"} Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.626116 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mk2vd" event={"ID":"7b1226c0-ea59-4c57-9837-cafbb926f373","Type":"ContainerStarted","Data":"08f562a91593c33ab35c9fe58dd7d4b8f4b93535ea2e73703953d92ee3b758dd"} Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.642320 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9366bc1-d7bf-412a-bf0d-a122e3a3d10f-logs\") pod \"horizon-9887b9bbf-xcxhr\" (UID: \"f9366bc1-d7bf-412a-bf0d-a122e3a3d10f\") " pod="openstack/horizon-9887b9bbf-xcxhr" Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.642387 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9366bc1-d7bf-412a-bf0d-a122e3a3d10f-scripts\") pod \"horizon-9887b9bbf-xcxhr\" (UID: \"f9366bc1-d7bf-412a-bf0d-a122e3a3d10f\") " pod="openstack/horizon-9887b9bbf-xcxhr" Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.642470 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9366bc1-d7bf-412a-bf0d-a122e3a3d10f-config-data\") pod \"horizon-9887b9bbf-xcxhr\" (UID: \"f9366bc1-d7bf-412a-bf0d-a122e3a3d10f\") " pod="openstack/horizon-9887b9bbf-xcxhr" Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.642493 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mczg9\" (UniqueName: \"kubernetes.io/projected/f9366bc1-d7bf-412a-bf0d-a122e3a3d10f-kube-api-access-mczg9\") pod \"horizon-9887b9bbf-xcxhr\" (UID: \"f9366bc1-d7bf-412a-bf0d-a122e3a3d10f\") " 
pod="openstack/horizon-9887b9bbf-xcxhr" Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.642516 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f9366bc1-d7bf-412a-bf0d-a122e3a3d10f-horizon-secret-key\") pod \"horizon-9887b9bbf-xcxhr\" (UID: \"f9366bc1-d7bf-412a-bf0d-a122e3a3d10f\") " pod="openstack/horizon-9887b9bbf-xcxhr" Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.656183 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.667989 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-q9mm5" podStartSLOduration=3.667968688 podStartE2EDuration="3.667968688s" podCreationTimestamp="2025-09-30 17:19:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:19:39.655855166 +0000 UTC m=+975.560901110" watchObservedRunningTime="2025-09-30 17:19:39.667968688 +0000 UTC m=+975.573014632" Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.716227 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-jjbr2" podStartSLOduration=3.716210648 podStartE2EDuration="3.716210648s" podCreationTimestamp="2025-09-30 17:19:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:19:39.694434117 +0000 UTC m=+975.599480061" watchObservedRunningTime="2025-09-30 17:19:39.716210648 +0000 UTC m=+975.621256592" Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.748428 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9366bc1-d7bf-412a-bf0d-a122e3a3d10f-logs\") pod \"horizon-9887b9bbf-xcxhr\" (UID: \"f9366bc1-d7bf-412a-bf0d-a122e3a3d10f\") " pod="openstack/horizon-9887b9bbf-xcxhr" Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.748517 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9366bc1-d7bf-412a-bf0d-a122e3a3d10f-scripts\") pod \"horizon-9887b9bbf-xcxhr\" (UID: \"f9366bc1-d7bf-412a-bf0d-a122e3a3d10f\") " pod="openstack/horizon-9887b9bbf-xcxhr" Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.748593 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9366bc1-d7bf-412a-bf0d-a122e3a3d10f-config-data\") pod \"horizon-9887b9bbf-xcxhr\" (UID: \"f9366bc1-d7bf-412a-bf0d-a122e3a3d10f\") " pod="openstack/horizon-9887b9bbf-xcxhr" Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.748619 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mczg9\" (UniqueName: \"kubernetes.io/projected/f9366bc1-d7bf-412a-bf0d-a122e3a3d10f-kube-api-access-mczg9\") pod \"horizon-9887b9bbf-xcxhr\" (UID: \"f9366bc1-d7bf-412a-bf0d-a122e3a3d10f\") " pod="openstack/horizon-9887b9bbf-xcxhr" Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.748643 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f9366bc1-d7bf-412a-bf0d-a122e3a3d10f-horizon-secret-key\") pod \"horizon-9887b9bbf-xcxhr\" (UID: 
\"f9366bc1-d7bf-412a-bf0d-a122e3a3d10f\") " pod="openstack/horizon-9887b9bbf-xcxhr" Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.749514 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9366bc1-d7bf-412a-bf0d-a122e3a3d10f-logs\") pod \"horizon-9887b9bbf-xcxhr\" (UID: \"f9366bc1-d7bf-412a-bf0d-a122e3a3d10f\") " pod="openstack/horizon-9887b9bbf-xcxhr" Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.750135 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9366bc1-d7bf-412a-bf0d-a122e3a3d10f-config-data\") pod \"horizon-9887b9bbf-xcxhr\" (UID: \"f9366bc1-d7bf-412a-bf0d-a122e3a3d10f\") " pod="openstack/horizon-9887b9bbf-xcxhr" Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.750488 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9366bc1-d7bf-412a-bf0d-a122e3a3d10f-scripts\") pod \"horizon-9887b9bbf-xcxhr\" (UID: \"f9366bc1-d7bf-412a-bf0d-a122e3a3d10f\") " pod="openstack/horizon-9887b9bbf-xcxhr" Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.777999 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f9366bc1-d7bf-412a-bf0d-a122e3a3d10f-horizon-secret-key\") pod \"horizon-9887b9bbf-xcxhr\" (UID: \"f9366bc1-d7bf-412a-bf0d-a122e3a3d10f\") " pod="openstack/horizon-9887b9bbf-xcxhr" Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.792435 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mczg9\" (UniqueName: \"kubernetes.io/projected/f9366bc1-d7bf-412a-bf0d-a122e3a3d10f-kube-api-access-mczg9\") pod \"horizon-9887b9bbf-xcxhr\" (UID: \"f9366bc1-d7bf-412a-bf0d-a122e3a3d10f\") " pod="openstack/horizon-9887b9bbf-xcxhr" Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.806786 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9887b9bbf-xcxhr" Sep 30 17:19:39 crc kubenswrapper[4821]: I0930 17:19:39.864519 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:19:39 crc kubenswrapper[4821]: W0930 17:19:39.925948 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76b51e71_c45d_493d_b96c_c742179cf21e.slice/crio-bce2c6b34917f32f43b51e4049f626cbabae5b839731b4d7d8ded51b73febbb6 WatchSource:0}: Error finding container bce2c6b34917f32f43b51e4049f626cbabae5b839731b4d7d8ded51b73febbb6: Status 404 returned error can't find the container with id bce2c6b34917f32f43b51e4049f626cbabae5b839731b4d7d8ded51b73febbb6 Sep 30 17:19:40 crc kubenswrapper[4821]: I0930 17:19:40.037177 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-gdfv8" Sep 30 17:19:40 crc kubenswrapper[4821]: I0930 17:19:40.172686 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/213fee6b-dc65-4947-9549-9ba0035b21f7-ovsdbserver-sb\") pod \"213fee6b-dc65-4947-9549-9ba0035b21f7\" (UID: \"213fee6b-dc65-4947-9549-9ba0035b21f7\") " Sep 30 17:19:40 crc kubenswrapper[4821]: I0930 17:19:40.173004 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/213fee6b-dc65-4947-9549-9ba0035b21f7-config\") pod \"213fee6b-dc65-4947-9549-9ba0035b21f7\" (UID: \"213fee6b-dc65-4947-9549-9ba0035b21f7\") " Sep 30 17:19:40 crc kubenswrapper[4821]: I0930 17:19:40.173028 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/213fee6b-dc65-4947-9549-9ba0035b21f7-ovsdbserver-nb\") pod \"213fee6b-dc65-4947-9549-9ba0035b21f7\" (UID: \"213fee6b-dc65-4947-9549-9ba0035b21f7\") " Sep 30 17:19:40 crc kubenswrapper[4821]: I0930 17:19:40.173209 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/213fee6b-dc65-4947-9549-9ba0035b21f7-dns-svc\") pod \"213fee6b-dc65-4947-9549-9ba0035b21f7\" (UID: \"213fee6b-dc65-4947-9549-9ba0035b21f7\") " Sep 30 17:19:40 crc kubenswrapper[4821]: I0930 17:19:40.173237 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7lq7\" (UniqueName: \"kubernetes.io/projected/213fee6b-dc65-4947-9549-9ba0035b21f7-kube-api-access-d7lq7\") pod \"213fee6b-dc65-4947-9549-9ba0035b21f7\" (UID: \"213fee6b-dc65-4947-9549-9ba0035b21f7\") " Sep 30 17:19:40 crc kubenswrapper[4821]: I0930 17:19:40.190416 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/213fee6b-dc65-4947-9549-9ba0035b21f7-kube-api-access-d7lq7" (OuterVolumeSpecName: "kube-api-access-d7lq7") pod "213fee6b-dc65-4947-9549-9ba0035b21f7" (UID: "213fee6b-dc65-4947-9549-9ba0035b21f7"). InnerVolumeSpecName "kube-api-access-d7lq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:19:40 crc kubenswrapper[4821]: I0930 17:19:40.208567 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/213fee6b-dc65-4947-9549-9ba0035b21f7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "213fee6b-dc65-4947-9549-9ba0035b21f7" (UID: "213fee6b-dc65-4947-9549-9ba0035b21f7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:19:40 crc kubenswrapper[4821]: I0930 17:19:40.219975 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/213fee6b-dc65-4947-9549-9ba0035b21f7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "213fee6b-dc65-4947-9549-9ba0035b21f7" (UID: "213fee6b-dc65-4947-9549-9ba0035b21f7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:19:40 crc kubenswrapper[4821]: I0930 17:19:40.236937 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/213fee6b-dc65-4947-9549-9ba0035b21f7-config" (OuterVolumeSpecName: "config") pod "213fee6b-dc65-4947-9549-9ba0035b21f7" (UID: "213fee6b-dc65-4947-9549-9ba0035b21f7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:19:40 crc kubenswrapper[4821]: I0930 17:19:40.238016 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/213fee6b-dc65-4947-9549-9ba0035b21f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "213fee6b-dc65-4947-9549-9ba0035b21f7" (UID: "213fee6b-dc65-4947-9549-9ba0035b21f7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:19:40 crc kubenswrapper[4821]: I0930 17:19:40.275201 4821 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/213fee6b-dc65-4947-9549-9ba0035b21f7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:40 crc kubenswrapper[4821]: I0930 17:19:40.275233 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/213fee6b-dc65-4947-9549-9ba0035b21f7-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:40 crc kubenswrapper[4821]: I0930 17:19:40.275245 4821 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/213fee6b-dc65-4947-9549-9ba0035b21f7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:40 crc kubenswrapper[4821]: I0930 17:19:40.275253 4821 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/213fee6b-dc65-4947-9549-9ba0035b21f7-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:40 crc kubenswrapper[4821]: I0930 17:19:40.275263 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7lq7\" (UniqueName: \"kubernetes.io/projected/213fee6b-dc65-4947-9549-9ba0035b21f7-kube-api-access-d7lq7\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:40 crc kubenswrapper[4821]: I0930 17:19:40.488893 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-9887b9bbf-xcxhr"] Sep 30 17:19:40 crc kubenswrapper[4821]: I0930 17:19:40.670013 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-gdfv8" Sep 30 17:19:40 crc kubenswrapper[4821]: I0930 17:19:40.672391 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-gdfv8" event={"ID":"213fee6b-dc65-4947-9549-9ba0035b21f7","Type":"ContainerDied","Data":"3a6d72a4257b676cfe94d7b035165122620c6c24b1c9eb72450bec5ef0de3b7c"} Sep 30 17:19:40 crc kubenswrapper[4821]: I0930 17:19:40.672566 4821 scope.go:117] "RemoveContainer" containerID="77b461235edfe73f6abb0fb7c967add5eb40b1140642f96d831f43c1a3803664" Sep 30 17:19:40 crc kubenswrapper[4821]: I0930 17:19:40.694649 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-znw7f" event={"ID":"ca7b68c4-bbb5-4b88-aab9-b246a1c8a158","Type":"ContainerStarted","Data":"05ddd2bf05955f82ab8e5de6b5c709c03eb5697f31e6b92d3378fd9961d1cae0"} Sep 30 17:19:40 crc kubenswrapper[4821]: I0930 17:19:40.695255 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7987f74bbc-znw7f" Sep 30 17:19:40 crc kubenswrapper[4821]: I0930 17:19:40.703190 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"76b51e71-c45d-493d-b96c-c742179cf21e","Type":"ContainerStarted","Data":"bce2c6b34917f32f43b51e4049f626cbabae5b839731b4d7d8ded51b73febbb6"} Sep 30 17:19:40 crc kubenswrapper[4821]: I0930 17:19:40.764898 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e72d85b-1d7f-46f0-8c84-7f171d3256d9" path="/var/lib/kubelet/pods/3e72d85b-1d7f-46f0-8c84-7f171d3256d9/volumes" Sep 30 17:19:40 crc kubenswrapper[4821]: I0930 17:19:40.768654 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b3f14e78-c9d7-4274-9e39-5bcedfbfd687","Type":"ContainerStarted","Data":"871af5db87d95feed97bfc2a989ae523a35a5da449842e27f099ae1a4c36fa3c"} Sep 30 17:19:40 crc kubenswrapper[4821]: I0930 17:19:40.771645 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9887b9bbf-xcxhr" event={"ID":"f9366bc1-d7bf-412a-bf0d-a122e3a3d10f","Type":"ContainerStarted","Data":"7e78df174183dfec0fc8626a059b82b9426d889f11b8643f676cfffb8461de3a"} Sep 30 17:19:40 crc kubenswrapper[4821]: I0930 17:19:40.773065 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-gdfv8"] Sep 30 17:19:40 crc kubenswrapper[4821]: I0930 17:19:40.812944 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-gdfv8"] Sep 30 17:19:40 crc kubenswrapper[4821]: I0930 17:19:40.825182 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7987f74bbc-znw7f" podStartSLOduration=3.8251609589999997 podStartE2EDuration="3.825160959s" podCreationTimestamp="2025-09-30 17:19:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:19:40.781688177 +0000 UTC m=+976.686734131" watchObservedRunningTime="2025-09-30 17:19:40.825160959 +0000 UTC m=+976.730206903" Sep 30 17:19:41 crc kubenswrapper[4821]: I0930 17:19:41.787491 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"76b51e71-c45d-493d-b96c-c742179cf21e","Type":"ContainerStarted","Data":"5086c5be9613bfcf0ccaa9f5791f776b824d9b2b81a75df408a9c9fc44b97196"} Sep 30 17:19:41 crc kubenswrapper[4821]: I0930 17:19:41.807166 4821 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b3f14e78-c9d7-4274-9e39-5bcedfbfd687" containerName="glance-log" containerID="cri-o://871af5db87d95feed97bfc2a989ae523a35a5da449842e27f099ae1a4c36fa3c" gracePeriod=30 Sep 30 17:19:41 crc kubenswrapper[4821]: I0930 17:19:41.807457 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b3f14e78-c9d7-4274-9e39-5bcedfbfd687","Type":"ContainerStarted","Data":"06e924972a97e49b488f8c659adce86eccfe3e9001d891cdd8466b7743c2f7da"} Sep 30 17:19:41 crc kubenswrapper[4821]: I0930 17:19:41.807772 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b3f14e78-c9d7-4274-9e39-5bcedfbfd687" containerName="glance-httpd" containerID="cri-o://06e924972a97e49b488f8c659adce86eccfe3e9001d891cdd8466b7743c2f7da" gracePeriod=30 Sep 30 17:19:41 crc kubenswrapper[4821]: I0930 17:19:41.828850 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.828832471 podStartE2EDuration="4.828832471s" podCreationTimestamp="2025-09-30 17:19:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:19:41.825850307 +0000 UTC m=+977.730896271" watchObservedRunningTime="2025-09-30 17:19:41.828832471 +0000 UTC m=+977.733878415" Sep 30 17:19:42 crc kubenswrapper[4821]: I0930 17:19:42.720897 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="213fee6b-dc65-4947-9549-9ba0035b21f7" path="/var/lib/kubelet/pods/213fee6b-dc65-4947-9549-9ba0035b21f7/volumes" Sep 30 17:19:42 crc kubenswrapper[4821]: I0930 17:19:42.825788 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"76b51e71-c45d-493d-b96c-c742179cf21e","Type":"ContainerStarted","Data":"26a780bbb587cad58805be2a1602dbe3be7ce350480c3d0f8e98da2588afb592"} Sep 30 17:19:42 crc kubenswrapper[4821]: I0930 17:19:42.825915 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="76b51e71-c45d-493d-b96c-c742179cf21e" containerName="glance-httpd" containerID="cri-o://26a780bbb587cad58805be2a1602dbe3be7ce350480c3d0f8e98da2588afb592" gracePeriod=30 Sep 30 17:19:42 crc kubenswrapper[4821]: I0930 17:19:42.825914 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="76b51e71-c45d-493d-b96c-c742179cf21e" containerName="glance-log" containerID="cri-o://5086c5be9613bfcf0ccaa9f5791f776b824d9b2b81a75df408a9c9fc44b97196" gracePeriod=30 Sep 30 17:19:42 crc kubenswrapper[4821]: I0930 17:19:42.830005 4821 generic.go:334] "Generic (PLEG): container finished" podID="b3f14e78-c9d7-4274-9e39-5bcedfbfd687" containerID="06e924972a97e49b488f8c659adce86eccfe3e9001d891cdd8466b7743c2f7da" exitCode=0 Sep 30 17:19:42 crc kubenswrapper[4821]: I0930 17:19:42.830030 4821 generic.go:334] "Generic (PLEG): container finished" podID="b3f14e78-c9d7-4274-9e39-5bcedfbfd687" containerID="871af5db87d95feed97bfc2a989ae523a35a5da449842e27f099ae1a4c36fa3c" exitCode=143 Sep 30 17:19:42 crc kubenswrapper[4821]: I0930 17:19:42.830720 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"b3f14e78-c9d7-4274-9e39-5bcedfbfd687","Type":"ContainerDied","Data":"06e924972a97e49b488f8c659adce86eccfe3e9001d891cdd8466b7743c2f7da"} Sep 30 17:19:42 crc kubenswrapper[4821]: I0930 17:19:42.830746 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b3f14e78-c9d7-4274-9e39-5bcedfbfd687","Type":"ContainerDied","Data":"871af5db87d95feed97bfc2a989ae523a35a5da449842e27f099ae1a4c36fa3c"} Sep 30 17:19:42 crc kubenswrapper[4821]: I0930 17:19:42.850118 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.85009555 podStartE2EDuration="6.85009555s" podCreationTimestamp="2025-09-30 17:19:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:19:42.844775769 +0000 UTC m=+978.749821713" watchObservedRunningTime="2025-09-30 17:19:42.85009555 +0000 UTC m=+978.755141494" Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.532418 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.650842 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-scripts\") pod \"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\" (UID: \"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\") " Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.650880 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-logs\") pod \"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\" (UID: \"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\") " Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.650931 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-config-data\") pod \"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\" (UID: \"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\") " Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.651013 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\" (UID: \"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\") " Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.651072 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-httpd-run\") pod \"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\" (UID: \"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\") " Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.651160 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dg45l\" (UniqueName: \"kubernetes.io/projected/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-kube-api-access-dg45l\") pod \"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\" (UID: \"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\") " Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.651178 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-combined-ca-bundle\") pod 
\"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\" (UID: \"b3f14e78-c9d7-4274-9e39-5bcedfbfd687\") " Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.654449 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b3f14e78-c9d7-4274-9e39-5bcedfbfd687" (UID: "b3f14e78-c9d7-4274-9e39-5bcedfbfd687"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.654520 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-logs" (OuterVolumeSpecName: "logs") pod "b3f14e78-c9d7-4274-9e39-5bcedfbfd687" (UID: "b3f14e78-c9d7-4274-9e39-5bcedfbfd687"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.659642 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "b3f14e78-c9d7-4274-9e39-5bcedfbfd687" (UID: "b3f14e78-c9d7-4274-9e39-5bcedfbfd687"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.662525 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-scripts" (OuterVolumeSpecName: "scripts") pod "b3f14e78-c9d7-4274-9e39-5bcedfbfd687" (UID: "b3f14e78-c9d7-4274-9e39-5bcedfbfd687"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.665392 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-kube-api-access-dg45l" (OuterVolumeSpecName: "kube-api-access-dg45l") pod "b3f14e78-c9d7-4274-9e39-5bcedfbfd687" (UID: "b3f14e78-c9d7-4274-9e39-5bcedfbfd687"). InnerVolumeSpecName "kube-api-access-dg45l". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.690722 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3f14e78-c9d7-4274-9e39-5bcedfbfd687" (UID: "b3f14e78-c9d7-4274-9e39-5bcedfbfd687"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.707283 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.724203 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-config-data" (OuterVolumeSpecName: "config-data") pod "b3f14e78-c9d7-4274-9e39-5bcedfbfd687" (UID: "b3f14e78-c9d7-4274-9e39-5bcedfbfd687"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.754408 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"76b51e71-c45d-493d-b96c-c742179cf21e\" (UID: \"76b51e71-c45d-493d-b96c-c742179cf21e\") " Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.754457 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76b51e71-c45d-493d-b96c-c742179cf21e-combined-ca-bundle\") pod \"76b51e71-c45d-493d-b96c-c742179cf21e\" (UID: \"76b51e71-c45d-493d-b96c-c742179cf21e\") " Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.754489 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76b51e71-c45d-493d-b96c-c742179cf21e-logs\") pod \"76b51e71-c45d-493d-b96c-c742179cf21e\" (UID: \"76b51e71-c45d-493d-b96c-c742179cf21e\") " Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.754519 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76b51e71-c45d-493d-b96c-c742179cf21e-scripts\") pod \"76b51e71-c45d-493d-b96c-c742179cf21e\" (UID: \"76b51e71-c45d-493d-b96c-c742179cf21e\") " Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.754541 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/76b51e71-c45d-493d-b96c-c742179cf21e-httpd-run\") pod \"76b51e71-c45d-493d-b96c-c742179cf21e\" (UID: \"76b51e71-c45d-493d-b96c-c742179cf21e\") " Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.754575 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b51e71-c45d-493d-b96c-c742179cf21e-config-data\") pod \"76b51e71-c45d-493d-b96c-c742179cf21e\" (UID: \"76b51e71-c45d-493d-b96c-c742179cf21e\") " Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.754601 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf9ln\" (UniqueName: \"kubernetes.io/projected/76b51e71-c45d-493d-b96c-c742179cf21e-kube-api-access-hf9ln\") pod \"76b51e71-c45d-493d-b96c-c742179cf21e\" (UID: \"76b51e71-c45d-493d-b96c-c742179cf21e\") " Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.754894 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.754925 4821 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.754934 4821 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-httpd-run\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.754943 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dg45l\" (UniqueName: \"kubernetes.io/projected/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-kube-api-access-dg45l\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:43 crc kubenswrapper[4821]: 
I0930 17:19:43.754953 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.754961 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.754969 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3f14e78-c9d7-4274-9e39-5bcedfbfd687-logs\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.758370 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76b51e71-c45d-493d-b96c-c742179cf21e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "76b51e71-c45d-493d-b96c-c742179cf21e" (UID: "76b51e71-c45d-493d-b96c-c742179cf21e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.759320 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76b51e71-c45d-493d-b96c-c742179cf21e-scripts" (OuterVolumeSpecName: "scripts") pod "76b51e71-c45d-493d-b96c-c742179cf21e" (UID: "76b51e71-c45d-493d-b96c-c742179cf21e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.759613 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76b51e71-c45d-493d-b96c-c742179cf21e-logs" (OuterVolumeSpecName: "logs") pod "76b51e71-c45d-493d-b96c-c742179cf21e" (UID: "76b51e71-c45d-493d-b96c-c742179cf21e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.763396 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "76b51e71-c45d-493d-b96c-c742179cf21e" (UID: "76b51e71-c45d-493d-b96c-c742179cf21e"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.763706 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76b51e71-c45d-493d-b96c-c742179cf21e-kube-api-access-hf9ln" (OuterVolumeSpecName: "kube-api-access-hf9ln") pod "76b51e71-c45d-493d-b96c-c742179cf21e" (UID: "76b51e71-c45d-493d-b96c-c742179cf21e"). InnerVolumeSpecName "kube-api-access-hf9ln". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.788757 4821 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.801111 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76b51e71-c45d-493d-b96c-c742179cf21e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76b51e71-c45d-493d-b96c-c742179cf21e" (UID: "76b51e71-c45d-493d-b96c-c742179cf21e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.802536 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76b51e71-c45d-493d-b96c-c742179cf21e-config-data" (OuterVolumeSpecName: "config-data") pod "76b51e71-c45d-493d-b96c-c742179cf21e" (UID: "76b51e71-c45d-493d-b96c-c742179cf21e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.847043 4821 generic.go:334] "Generic (PLEG): container finished" podID="76b51e71-c45d-493d-b96c-c742179cf21e" containerID="26a780bbb587cad58805be2a1602dbe3be7ce350480c3d0f8e98da2588afb592" exitCode=0 Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.847115 4821 generic.go:334] "Generic (PLEG): container finished" podID="76b51e71-c45d-493d-b96c-c742179cf21e" containerID="5086c5be9613bfcf0ccaa9f5791f776b824d9b2b81a75df408a9c9fc44b97196" exitCode=143 Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.847154 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"76b51e71-c45d-493d-b96c-c742179cf21e","Type":"ContainerDied","Data":"26a780bbb587cad58805be2a1602dbe3be7ce350480c3d0f8e98da2588afb592"} Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.847269 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"76b51e71-c45d-493d-b96c-c742179cf21e","Type":"ContainerDied","Data":"5086c5be9613bfcf0ccaa9f5791f776b824d9b2b81a75df408a9c9fc44b97196"} Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.847289 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"76b51e71-c45d-493d-b96c-c742179cf21e","Type":"ContainerDied","Data":"bce2c6b34917f32f43b51e4049f626cbabae5b839731b4d7d8ded51b73febbb6"} Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.847314 4821 scope.go:117] "RemoveContainer" containerID="26a780bbb587cad58805be2a1602dbe3be7ce350480c3d0f8e98da2588afb592" Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.847966 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.852798 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b3f14e78-c9d7-4274-9e39-5bcedfbfd687","Type":"ContainerDied","Data":"12e2a8033a20ec42ef5268143cd3d07fdfadb5cfae930f51de7ccd651c3181c1"} Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.852901 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.856879 4821 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/76b51e71-c45d-493d-b96c-c742179cf21e-httpd-run\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.856907 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b51e71-c45d-493d-b96c-c742179cf21e-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.856918 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf9ln\" (UniqueName: \"kubernetes.io/projected/76b51e71-c45d-493d-b96c-c742179cf21e-kube-api-access-hf9ln\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.856929 4821 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.856981 4821 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.856991 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76b51e71-c45d-493d-b96c-c742179cf21e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.857000 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76b51e71-c45d-493d-b96c-c742179cf21e-logs\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.857009 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76b51e71-c45d-493d-b96c-c742179cf21e-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.900933 4821 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.908750 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.919064 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.935502 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.959919 4821 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:43 crc kubenswrapper[4821]: I0930 17:19:43.988033 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.020411 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:19:44 crc kubenswrapper[4821]: E0930 17:19:44.024491 4821 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="b3f14e78-c9d7-4274-9e39-5bcedfbfd687" containerName="glance-httpd" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.024534 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3f14e78-c9d7-4274-9e39-5bcedfbfd687" containerName="glance-httpd" Sep 30 17:19:44 crc kubenswrapper[4821]: E0930 17:19:44.024562 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3f14e78-c9d7-4274-9e39-5bcedfbfd687" containerName="glance-log" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.024569 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3f14e78-c9d7-4274-9e39-5bcedfbfd687" containerName="glance-log" Sep 30 17:19:44 crc kubenswrapper[4821]: E0930 17:19:44.024585 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76b51e71-c45d-493d-b96c-c742179cf21e" containerName="glance-log" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.024593 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="76b51e71-c45d-493d-b96c-c742179cf21e" containerName="glance-log" Sep 30 17:19:44 crc kubenswrapper[4821]: E0930 17:19:44.024604 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="213fee6b-dc65-4947-9549-9ba0035b21f7" containerName="init" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.024613 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="213fee6b-dc65-4947-9549-9ba0035b21f7" containerName="init" Sep 30 17:19:44 crc kubenswrapper[4821]: E0930 17:19:44.024627 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76b51e71-c45d-493d-b96c-c742179cf21e" containerName="glance-httpd" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.024634 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="76b51e71-c45d-493d-b96c-c742179cf21e" containerName="glance-httpd" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.034272 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="76b51e71-c45d-493d-b96c-c742179cf21e" containerName="glance-log" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.034338 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3f14e78-c9d7-4274-9e39-5bcedfbfd687" containerName="glance-log" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.034355 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="76b51e71-c45d-493d-b96c-c742179cf21e" containerName="glance-httpd" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.034386 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3f14e78-c9d7-4274-9e39-5bcedfbfd687" containerName="glance-httpd" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.034401 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="213fee6b-dc65-4947-9549-9ba0035b21f7" containerName="init" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.036142 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.040247 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-8kj9b" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.046682 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.047309 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.060980 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.067224 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.070999 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.081114 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.100859 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.165107 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.166357 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b9b0ef8-b409-45b6-924a-d28b0584cca0-scripts\") pod \"glance-default-external-api-0\" (UID: \"0b9b0ef8-b409-45b6-924a-d28b0584cca0\") " pod="openstack/glance-default-external-api-0" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.166448 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxvpg\" (UniqueName: \"kubernetes.io/projected/0b9b0ef8-b409-45b6-924a-d28b0584cca0-kube-api-access-sxvpg\") pod \"glance-default-external-api-0\" (UID: \"0b9b0ef8-b409-45b6-924a-d28b0584cca0\") " pod="openstack/glance-default-external-api-0" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.166466 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec1eff5b-04bb-4c4a-bcc6-88025f356922-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.166532 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec1eff5b-04bb-4c4a-bcc6-88025f356922-logs\") pod \"glance-default-internal-api-0\" (UID: \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.166550 4821 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"0b9b0ef8-b409-45b6-924a-d28b0584cca0\") " pod="openstack/glance-default-external-api-0" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.166687 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0b9b0ef8-b409-45b6-924a-d28b0584cca0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0b9b0ef8-b409-45b6-924a-d28b0584cca0\") " pod="openstack/glance-default-external-api-0" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.166844 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqpfr\" (UniqueName: \"kubernetes.io/projected/ec1eff5b-04bb-4c4a-bcc6-88025f356922-kube-api-access-dqpfr\") pod \"glance-default-internal-api-0\" (UID: \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.166871 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b9b0ef8-b409-45b6-924a-d28b0584cca0-config-data\") pod \"glance-default-external-api-0\" (UID: \"0b9b0ef8-b409-45b6-924a-d28b0584cca0\") " pod="openstack/glance-default-external-api-0" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.166923 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ec1eff5b-04bb-4c4a-bcc6-88025f356922-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.166943 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec1eff5b-04bb-4c4a-bcc6-88025f356922-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.166959 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b9b0ef8-b409-45b6-924a-d28b0584cca0-logs\") pod \"glance-default-external-api-0\" (UID: \"0b9b0ef8-b409-45b6-924a-d28b0584cca0\") " pod="openstack/glance-default-external-api-0" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.167021 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b9b0ef8-b409-45b6-924a-d28b0584cca0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0b9b0ef8-b409-45b6-924a-d28b0584cca0\") " pod="openstack/glance-default-external-api-0" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.167113 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec1eff5b-04bb-4c4a-bcc6-88025f356922-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:19:44 
Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.269912 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0b9b0ef8-b409-45b6-924a-d28b0584cca0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0b9b0ef8-b409-45b6-924a-d28b0584cca0\") " pod="openstack/glance-default-external-api-0"
Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.270036 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqpfr\" (UniqueName: \"kubernetes.io/projected/ec1eff5b-04bb-4c4a-bcc6-88025f356922-kube-api-access-dqpfr\") pod \"glance-default-internal-api-0\" (UID: \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\") " pod="openstack/glance-default-internal-api-0"
Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.270058 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b9b0ef8-b409-45b6-924a-d28b0584cca0-config-data\") pod \"glance-default-external-api-0\" (UID: \"0b9b0ef8-b409-45b6-924a-d28b0584cca0\") " pod="openstack/glance-default-external-api-0"
Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.270478 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ec1eff5b-04bb-4c4a-bcc6-88025f356922-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\") " pod="openstack/glance-default-internal-api-0"
Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.270727 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ec1eff5b-04bb-4c4a-bcc6-88025f356922-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\") " pod="openstack/glance-default-internal-api-0"
Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.270765 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec1eff5b-04bb-4c4a-bcc6-88025f356922-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\") " pod="openstack/glance-default-internal-api-0"
Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.270781 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b9b0ef8-b409-45b6-924a-d28b0584cca0-logs\") pod \"glance-default-external-api-0\" (UID: \"0b9b0ef8-b409-45b6-924a-d28b0584cca0\") " pod="openstack/glance-default-external-api-0"
Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.271177 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b9b0ef8-b409-45b6-924a-d28b0584cca0-logs\") pod \"glance-default-external-api-0\" (UID: \"0b9b0ef8-b409-45b6-924a-d28b0584cca0\") " pod="openstack/glance-default-external-api-0"
Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.272247 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b9b0ef8-b409-45b6-924a-d28b0584cca0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0b9b0ef8-b409-45b6-924a-d28b0584cca0\") " pod="openstack/glance-default-external-api-0"
Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.272876 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec1eff5b-04bb-4c4a-bcc6-88025f356922-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\") " pod="openstack/glance-default-internal-api-0"
Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.272909 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\") " pod="openstack/glance-default-internal-api-0"
Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.272956 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b9b0ef8-b409-45b6-924a-d28b0584cca0-scripts\") pod \"glance-default-external-api-0\" (UID: \"0b9b0ef8-b409-45b6-924a-d28b0584cca0\") " pod="openstack/glance-default-external-api-0"
Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.274247 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxvpg\" (UniqueName: \"kubernetes.io/projected/0b9b0ef8-b409-45b6-924a-d28b0584cca0-kube-api-access-sxvpg\") pod \"glance-default-external-api-0\" (UID: \"0b9b0ef8-b409-45b6-924a-d28b0584cca0\") " pod="openstack/glance-default-external-api-0"
Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.274268 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec1eff5b-04bb-4c4a-bcc6-88025f356922-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\") " pod="openstack/glance-default-internal-api-0"
Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.274328 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec1eff5b-04bb-4c4a-bcc6-88025f356922-logs\") pod \"glance-default-internal-api-0\" (UID: \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\") " pod="openstack/glance-default-internal-api-0"
Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.274350 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"0b9b0ef8-b409-45b6-924a-d28b0584cca0\") " pod="openstack/glance-default-external-api-0"
Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.274556 4821 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0"
Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.274830 4821 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"0b9b0ef8-b409-45b6-924a-d28b0584cca0\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0"
Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.277522 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec1eff5b-04bb-4c4a-bcc6-88025f356922-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\") " pod="openstack/glance-default-internal-api-0"
Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.277591 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec1eff5b-04bb-4c4a-bcc6-88025f356922-logs\") pod \"glance-default-internal-api-0\" (UID: \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\") " pod="openstack/glance-default-internal-api-0"
Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.282430 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec1eff5b-04bb-4c4a-bcc6-88025f356922-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\") " pod="openstack/glance-default-internal-api-0"
Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.294009 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b9b0ef8-b409-45b6-924a-d28b0584cca0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0b9b0ef8-b409-45b6-924a-d28b0584cca0\") " pod="openstack/glance-default-external-api-0"
Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.294796 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b9b0ef8-b409-45b6-924a-d28b0584cca0-scripts\") pod \"glance-default-external-api-0\" (UID: \"0b9b0ef8-b409-45b6-924a-d28b0584cca0\") " pod="openstack/glance-default-external-api-0"
Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.295605 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec1eff5b-04bb-4c4a-bcc6-88025f356922-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\") " pod="openstack/glance-default-internal-api-0"
Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.304425 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b9b0ef8-b409-45b6-924a-d28b0584cca0-config-data\") pod \"glance-default-external-api-0\" (UID: \"0b9b0ef8-b409-45b6-924a-d28b0584cca0\") " pod="openstack/glance-default-external-api-0"
Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.304867 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxvpg\" (UniqueName: \"kubernetes.io/projected/0b9b0ef8-b409-45b6-924a-d28b0584cca0-kube-api-access-sxvpg\") pod \"glance-default-external-api-0\" (UID: \"0b9b0ef8-b409-45b6-924a-d28b0584cca0\") " pod="openstack/glance-default-external-api-0"
Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.305283 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqpfr\" (UniqueName: \"kubernetes.io/projected/ec1eff5b-04bb-4c4a-bcc6-88025f356922-kube-api-access-dqpfr\") pod \"glance-default-internal-api-0\" (UID: \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\") " pod="openstack/glance-default-internal-api-0"
pod="openstack/glance-default-internal-api-0" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.332381 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.342523 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"0b9b0ef8-b409-45b6-924a-d28b0584cca0\") " pod="openstack/glance-default-external-api-0" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.386626 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.398153 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.720475 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76b51e71-c45d-493d-b96c-c742179cf21e" path="/var/lib/kubelet/pods/76b51e71-c45d-493d-b96c-c742179cf21e/volumes" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.722686 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3f14e78-c9d7-4274-9e39-5bcedfbfd687" path="/var/lib/kubelet/pods/b3f14e78-c9d7-4274-9e39-5bcedfbfd687/volumes" Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.863345 4821 generic.go:334] "Generic (PLEG): container finished" podID="d86109cd-7a87-43db-ad27-5669b4b2a03b" containerID="26b41bd42612ef95a77c5610ac7e0b4e37525a5f648c17072c4581fa06ea382b" exitCode=0 Sep 30 17:19:44 crc kubenswrapper[4821]: I0930 17:19:44.863402 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-q9mm5" event={"ID":"d86109cd-7a87-43db-ad27-5669b4b2a03b","Type":"ContainerDied","Data":"26b41bd42612ef95a77c5610ac7e0b4e37525a5f648c17072c4581fa06ea382b"} Sep 30 17:19:46 crc kubenswrapper[4821]: I0930 17:19:46.949104 4821 scope.go:117] "RemoveContainer" containerID="5086c5be9613bfcf0ccaa9f5791f776b824d9b2b81a75df408a9c9fc44b97196" Sep 30 17:19:47 crc kubenswrapper[4821]: I0930 17:19:47.054289 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-q9mm5" Sep 30 17:19:47 crc kubenswrapper[4821]: I0930 17:19:47.132022 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d86109cd-7a87-43db-ad27-5669b4b2a03b-combined-ca-bundle\") pod \"d86109cd-7a87-43db-ad27-5669b4b2a03b\" (UID: \"d86109cd-7a87-43db-ad27-5669b4b2a03b\") " Sep 30 17:19:47 crc kubenswrapper[4821]: I0930 17:19:47.132057 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sgxz\" (UniqueName: \"kubernetes.io/projected/d86109cd-7a87-43db-ad27-5669b4b2a03b-kube-api-access-7sgxz\") pod \"d86109cd-7a87-43db-ad27-5669b4b2a03b\" (UID: \"d86109cd-7a87-43db-ad27-5669b4b2a03b\") " Sep 30 17:19:47 crc kubenswrapper[4821]: I0930 17:19:47.132142 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d86109cd-7a87-43db-ad27-5669b4b2a03b-fernet-keys\") pod \"d86109cd-7a87-43db-ad27-5669b4b2a03b\" (UID: \"d86109cd-7a87-43db-ad27-5669b4b2a03b\") " Sep 30 17:19:47 crc kubenswrapper[4821]: I0930 17:19:47.132434 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d86109cd-7a87-43db-ad27-5669b4b2a03b-credential-keys\") pod \"d86109cd-7a87-43db-ad27-5669b4b2a03b\" (UID: \"d86109cd-7a87-43db-ad27-5669b4b2a03b\") " Sep 30 17:19:47 crc kubenswrapper[4821]: I0930 17:19:47.132477 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d86109cd-7a87-43db-ad27-5669b4b2a03b-config-data\") pod \"d86109cd-7a87-43db-ad27-5669b4b2a03b\" (UID: \"d86109cd-7a87-43db-ad27-5669b4b2a03b\") " Sep 30 17:19:47 crc kubenswrapper[4821]: I0930 17:19:47.132547 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d86109cd-7a87-43db-ad27-5669b4b2a03b-scripts\") pod \"d86109cd-7a87-43db-ad27-5669b4b2a03b\" (UID: \"d86109cd-7a87-43db-ad27-5669b4b2a03b\") " Sep 30 17:19:47 crc kubenswrapper[4821]: I0930 17:19:47.185256 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d86109cd-7a87-43db-ad27-5669b4b2a03b-kube-api-access-7sgxz" (OuterVolumeSpecName: "kube-api-access-7sgxz") pod "d86109cd-7a87-43db-ad27-5669b4b2a03b" (UID: "d86109cd-7a87-43db-ad27-5669b4b2a03b"). InnerVolumeSpecName "kube-api-access-7sgxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:19:47 crc kubenswrapper[4821]: I0930 17:19:47.189009 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d86109cd-7a87-43db-ad27-5669b4b2a03b-scripts" (OuterVolumeSpecName: "scripts") pod "d86109cd-7a87-43db-ad27-5669b4b2a03b" (UID: "d86109cd-7a87-43db-ad27-5669b4b2a03b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:19:47 crc kubenswrapper[4821]: I0930 17:19:47.189524 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d86109cd-7a87-43db-ad27-5669b4b2a03b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d86109cd-7a87-43db-ad27-5669b4b2a03b" (UID: "d86109cd-7a87-43db-ad27-5669b4b2a03b"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:19:47 crc kubenswrapper[4821]: I0930 17:19:47.189951 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d86109cd-7a87-43db-ad27-5669b4b2a03b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d86109cd-7a87-43db-ad27-5669b4b2a03b" (UID: "d86109cd-7a87-43db-ad27-5669b4b2a03b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:19:47 crc kubenswrapper[4821]: I0930 17:19:47.191305 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d86109cd-7a87-43db-ad27-5669b4b2a03b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d86109cd-7a87-43db-ad27-5669b4b2a03b" (UID: "d86109cd-7a87-43db-ad27-5669b4b2a03b"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:19:47 crc kubenswrapper[4821]: I0930 17:19:47.191832 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d86109cd-7a87-43db-ad27-5669b4b2a03b-config-data" (OuterVolumeSpecName: "config-data") pod "d86109cd-7a87-43db-ad27-5669b4b2a03b" (UID: "d86109cd-7a87-43db-ad27-5669b4b2a03b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:19:47 crc kubenswrapper[4821]: I0930 17:19:47.234836 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d86109cd-7a87-43db-ad27-5669b4b2a03b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:47 crc kubenswrapper[4821]: I0930 17:19:47.234866 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sgxz\" (UniqueName: \"kubernetes.io/projected/d86109cd-7a87-43db-ad27-5669b4b2a03b-kube-api-access-7sgxz\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:47 crc kubenswrapper[4821]: I0930 17:19:47.234877 4821 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d86109cd-7a87-43db-ad27-5669b4b2a03b-fernet-keys\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:47 crc kubenswrapper[4821]: I0930 17:19:47.234885 4821 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d86109cd-7a87-43db-ad27-5669b4b2a03b-credential-keys\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:47 crc kubenswrapper[4821]: I0930 17:19:47.234893 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d86109cd-7a87-43db-ad27-5669b4b2a03b-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:47 crc kubenswrapper[4821]: I0930 17:19:47.234901 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d86109cd-7a87-43db-ad27-5669b4b2a03b-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:47 crc kubenswrapper[4821]: I0930 17:19:47.442892 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:19:47 crc kubenswrapper[4821]: I0930 17:19:47.456211 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7987f74bbc-znw7f" Sep 30 17:19:47 crc kubenswrapper[4821]: I0930 17:19:47.524418 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-hqt9z"] Sep 30 17:19:47 crc kubenswrapper[4821]: I0930 17:19:47.524632 4821 
Sep 30 17:19:47 crc kubenswrapper[4821]: I0930 17:19:47.556207 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Sep 30 17:19:47 crc kubenswrapper[4821]: I0930 17:19:47.764792 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-hqt9z" podUID="2f2f19e5-bd02-4369-b594-ee71c4c83509" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused"
Sep 30 17:19:47 crc kubenswrapper[4821]: I0930 17:19:47.935759 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-q9mm5" event={"ID":"d86109cd-7a87-43db-ad27-5669b4b2a03b","Type":"ContainerDied","Data":"b128a91bd79f2577850e75486e1a15874be034ee5f9b8da1bae3c05985d5f68f"}
Sep 30 17:19:47 crc kubenswrapper[4821]: I0930 17:19:47.935990 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b128a91bd79f2577850e75486e1a15874be034ee5f9b8da1bae3c05985d5f68f"
Sep 30 17:19:47 crc kubenswrapper[4821]: I0930 17:19:47.936044 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-q9mm5"
Sep 30 17:19:47 crc kubenswrapper[4821]: I0930 17:19:47.960284 4821 generic.go:334] "Generic (PLEG): container finished" podID="2f2f19e5-bd02-4369-b594-ee71c4c83509" containerID="dd8bf1837b14ce49342b508228007f30436900c3b57e32f16d73b17d9b8f629a" exitCode=0
Sep 30 17:19:47 crc kubenswrapper[4821]: I0930 17:19:47.960326 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-hqt9z" event={"ID":"2f2f19e5-bd02-4369-b594-ee71c4c83509","Type":"ContainerDied","Data":"dd8bf1837b14ce49342b508228007f30436900c3b57e32f16d73b17d9b8f629a"}
Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.229490 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-q9mm5"]
Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.240676 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-q9mm5"]
Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.336229 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-j72gp"]
Sep 30 17:19:48 crc kubenswrapper[4821]: E0930 17:19:48.336542 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d86109cd-7a87-43db-ad27-5669b4b2a03b" containerName="keystone-bootstrap"
Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.336559 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="d86109cd-7a87-43db-ad27-5669b4b2a03b" containerName="keystone-bootstrap"
Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.336751 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="d86109cd-7a87-43db-ad27-5669b4b2a03b" containerName="keystone-bootstrap"
Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.338794 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-j72gp"
Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.341499 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dkq42"
Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.341768 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.342656 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.342869 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.346936 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-j72gp"]
Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.456783 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/008af46f-5c9c-44f6-beb7-fa105649d52b-config-data\") pod \"keystone-bootstrap-j72gp\" (UID: \"008af46f-5c9c-44f6-beb7-fa105649d52b\") " pod="openstack/keystone-bootstrap-j72gp"
Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.456835 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/008af46f-5c9c-44f6-beb7-fa105649d52b-credential-keys\") pod \"keystone-bootstrap-j72gp\" (UID: \"008af46f-5c9c-44f6-beb7-fa105649d52b\") " pod="openstack/keystone-bootstrap-j72gp"
Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.457012 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/008af46f-5c9c-44f6-beb7-fa105649d52b-scripts\") pod \"keystone-bootstrap-j72gp\" (UID: \"008af46f-5c9c-44f6-beb7-fa105649d52b\") " pod="openstack/keystone-bootstrap-j72gp"
Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.457368 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/008af46f-5c9c-44f6-beb7-fa105649d52b-combined-ca-bundle\") pod \"keystone-bootstrap-j72gp\" (UID: \"008af46f-5c9c-44f6-beb7-fa105649d52b\") " pod="openstack/keystone-bootstrap-j72gp"
Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.457416 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/008af46f-5c9c-44f6-beb7-fa105649d52b-fernet-keys\") pod \"keystone-bootstrap-j72gp\" (UID: \"008af46f-5c9c-44f6-beb7-fa105649d52b\") " pod="openstack/keystone-bootstrap-j72gp"
Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.457551 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgcvq\" (UniqueName: \"kubernetes.io/projected/008af46f-5c9c-44f6-beb7-fa105649d52b-kube-api-access-tgcvq\") pod \"keystone-bootstrap-j72gp\" (UID: \"008af46f-5c9c-44f6-beb7-fa105649d52b\") " pod="openstack/keystone-bootstrap-j72gp"
Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.559622 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgcvq\" (UniqueName: \"kubernetes.io/projected/008af46f-5c9c-44f6-beb7-fa105649d52b-kube-api-access-tgcvq\") pod \"keystone-bootstrap-j72gp\" (UID: \"008af46f-5c9c-44f6-beb7-fa105649d52b\") " pod="openstack/keystone-bootstrap-j72gp"
Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.559673 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/008af46f-5c9c-44f6-beb7-fa105649d52b-config-data\") pod \"keystone-bootstrap-j72gp\" (UID: \"008af46f-5c9c-44f6-beb7-fa105649d52b\") " pod="openstack/keystone-bootstrap-j72gp"
Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.559691 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/008af46f-5c9c-44f6-beb7-fa105649d52b-credential-keys\") pod \"keystone-bootstrap-j72gp\" (UID: \"008af46f-5c9c-44f6-beb7-fa105649d52b\") " pod="openstack/keystone-bootstrap-j72gp"
Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.559732 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/008af46f-5c9c-44f6-beb7-fa105649d52b-scripts\") pod \"keystone-bootstrap-j72gp\" (UID: \"008af46f-5c9c-44f6-beb7-fa105649d52b\") " pod="openstack/keystone-bootstrap-j72gp"
Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.560156 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/008af46f-5c9c-44f6-beb7-fa105649d52b-combined-ca-bundle\") pod \"keystone-bootstrap-j72gp\" (UID: \"008af46f-5c9c-44f6-beb7-fa105649d52b\") " pod="openstack/keystone-bootstrap-j72gp"
Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.560180 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/008af46f-5c9c-44f6-beb7-fa105649d52b-fernet-keys\") pod \"keystone-bootstrap-j72gp\" (UID: \"008af46f-5c9c-44f6-beb7-fa105649d52b\") " pod="openstack/keystone-bootstrap-j72gp"
Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.567684 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/008af46f-5c9c-44f6-beb7-fa105649d52b-fernet-keys\") pod \"keystone-bootstrap-j72gp\" (UID: \"008af46f-5c9c-44f6-beb7-fa105649d52b\") " pod="openstack/keystone-bootstrap-j72gp"
Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.567737 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/008af46f-5c9c-44f6-beb7-fa105649d52b-config-data\") pod \"keystone-bootstrap-j72gp\" (UID: \"008af46f-5c9c-44f6-beb7-fa105649d52b\") " pod="openstack/keystone-bootstrap-j72gp"
Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.568140 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/008af46f-5c9c-44f6-beb7-fa105649d52b-credential-keys\") pod \"keystone-bootstrap-j72gp\" (UID: \"008af46f-5c9c-44f6-beb7-fa105649d52b\") " pod="openstack/keystone-bootstrap-j72gp"
Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.569396 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/008af46f-5c9c-44f6-beb7-fa105649d52b-scripts\") pod \"keystone-bootstrap-j72gp\" (UID: \"008af46f-5c9c-44f6-beb7-fa105649d52b\") " pod="openstack/keystone-bootstrap-j72gp"
Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.579736 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgcvq\" (UniqueName: \"kubernetes.io/projected/008af46f-5c9c-44f6-beb7-fa105649d52b-kube-api-access-tgcvq\") pod \"keystone-bootstrap-j72gp\" (UID: \"008af46f-5c9c-44f6-beb7-fa105649d52b\") " pod="openstack/keystone-bootstrap-j72gp"
"MountVolume.SetUp succeeded for volume \"kube-api-access-tgcvq\" (UniqueName: \"kubernetes.io/projected/008af46f-5c9c-44f6-beb7-fa105649d52b-kube-api-access-tgcvq\") pod \"keystone-bootstrap-j72gp\" (UID: \"008af46f-5c9c-44f6-beb7-fa105649d52b\") " pod="openstack/keystone-bootstrap-j72gp" Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.581400 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/008af46f-5c9c-44f6-beb7-fa105649d52b-combined-ca-bundle\") pod \"keystone-bootstrap-j72gp\" (UID: \"008af46f-5c9c-44f6-beb7-fa105649d52b\") " pod="openstack/keystone-bootstrap-j72gp" Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.661375 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-j72gp" Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.707390 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7ddb7997dc-cnx5j"] Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.728493 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d86109cd-7a87-43db-ad27-5669b4b2a03b" path="/var/lib/kubelet/pods/d86109cd-7a87-43db-ad27-5669b4b2a03b/volumes" Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.764234 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5b974b45dd-mbzvm"] Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.765648 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b974b45dd-mbzvm" Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.768723 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.800435 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b974b45dd-mbzvm"] Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.897838 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abd2779c-c7a7-4d42-8e83-7cbec573d595-scripts\") pod \"horizon-5b974b45dd-mbzvm\" (UID: \"abd2779c-c7a7-4d42-8e83-7cbec573d595\") " pod="openstack/horizon-5b974b45dd-mbzvm" Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.897889 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abd2779c-c7a7-4d42-8e83-7cbec573d595-logs\") pod \"horizon-5b974b45dd-mbzvm\" (UID: \"abd2779c-c7a7-4d42-8e83-7cbec573d595\") " pod="openstack/horizon-5b974b45dd-mbzvm" Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.897930 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/abd2779c-c7a7-4d42-8e83-7cbec573d595-horizon-tls-certs\") pod \"horizon-5b974b45dd-mbzvm\" (UID: \"abd2779c-c7a7-4d42-8e83-7cbec573d595\") " pod="openstack/horizon-5b974b45dd-mbzvm" Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.897984 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/abd2779c-c7a7-4d42-8e83-7cbec573d595-config-data\") pod \"horizon-5b974b45dd-mbzvm\" (UID: \"abd2779c-c7a7-4d42-8e83-7cbec573d595\") " pod="openstack/horizon-5b974b45dd-mbzvm" Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.898008 4821 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/abd2779c-c7a7-4d42-8e83-7cbec573d595-horizon-secret-key\") pod \"horizon-5b974b45dd-mbzvm\" (UID: \"abd2779c-c7a7-4d42-8e83-7cbec573d595\") " pod="openstack/horizon-5b974b45dd-mbzvm" Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.898024 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84kp6\" (UniqueName: \"kubernetes.io/projected/abd2779c-c7a7-4d42-8e83-7cbec573d595-kube-api-access-84kp6\") pod \"horizon-5b974b45dd-mbzvm\" (UID: \"abd2779c-c7a7-4d42-8e83-7cbec573d595\") " pod="openstack/horizon-5b974b45dd-mbzvm" Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.898093 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abd2779c-c7a7-4d42-8e83-7cbec573d595-combined-ca-bundle\") pod \"horizon-5b974b45dd-mbzvm\" (UID: \"abd2779c-c7a7-4d42-8e83-7cbec573d595\") " pod="openstack/horizon-5b974b45dd-mbzvm" Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.904055 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-9887b9bbf-xcxhr"] Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.910329 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-78b9594fb8-nw9qj"] Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.912744 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78b9594fb8-nw9qj" Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.916323 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78b9594fb8-nw9qj"] Sep 30 17:19:48 crc kubenswrapper[4821]: I0930 17:19:48.999521 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abd2779c-c7a7-4d42-8e83-7cbec573d595-scripts\") pod \"horizon-5b974b45dd-mbzvm\" (UID: \"abd2779c-c7a7-4d42-8e83-7cbec573d595\") " pod="openstack/horizon-5b974b45dd-mbzvm" Sep 30 17:19:49 crc kubenswrapper[4821]: I0930 17:19:48.999571 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abd2779c-c7a7-4d42-8e83-7cbec573d595-logs\") pod \"horizon-5b974b45dd-mbzvm\" (UID: \"abd2779c-c7a7-4d42-8e83-7cbec573d595\") " pod="openstack/horizon-5b974b45dd-mbzvm" Sep 30 17:19:49 crc kubenswrapper[4821]: I0930 17:19:48.999620 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4be55b7f-8f57-44f9-899b-d8e6676e5e02-horizon-tls-certs\") pod \"horizon-78b9594fb8-nw9qj\" (UID: \"4be55b7f-8f57-44f9-899b-d8e6676e5e02\") " pod="openstack/horizon-78b9594fb8-nw9qj" Sep 30 17:19:49 crc kubenswrapper[4821]: I0930 17:19:48.999641 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4be55b7f-8f57-44f9-899b-d8e6676e5e02-logs\") pod \"horizon-78b9594fb8-nw9qj\" (UID: \"4be55b7f-8f57-44f9-899b-d8e6676e5e02\") " pod="openstack/horizon-78b9594fb8-nw9qj" Sep 30 17:19:49 crc kubenswrapper[4821]: I0930 17:19:48.999666 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/abd2779c-c7a7-4d42-8e83-7cbec573d595-horizon-tls-certs\") pod \"horizon-5b974b45dd-mbzvm\" (UID: \"abd2779c-c7a7-4d42-8e83-7cbec573d595\") " pod="openstack/horizon-5b974b45dd-mbzvm" Sep 30 17:19:49 crc kubenswrapper[4821]: I0930 17:19:48.999728 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4be55b7f-8f57-44f9-899b-d8e6676e5e02-horizon-secret-key\") pod \"horizon-78b9594fb8-nw9qj\" (UID: \"4be55b7f-8f57-44f9-899b-d8e6676e5e02\") " pod="openstack/horizon-78b9594fb8-nw9qj" Sep 30 17:19:49 crc kubenswrapper[4821]: I0930 17:19:48.999765 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/abd2779c-c7a7-4d42-8e83-7cbec573d595-config-data\") pod \"horizon-5b974b45dd-mbzvm\" (UID: \"abd2779c-c7a7-4d42-8e83-7cbec573d595\") " pod="openstack/horizon-5b974b45dd-mbzvm" Sep 30 17:19:49 crc kubenswrapper[4821]: I0930 17:19:48.999790 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/abd2779c-c7a7-4d42-8e83-7cbec573d595-horizon-secret-key\") pod \"horizon-5b974b45dd-mbzvm\" (UID: \"abd2779c-c7a7-4d42-8e83-7cbec573d595\") " pod="openstack/horizon-5b974b45dd-mbzvm" Sep 30 17:19:49 crc kubenswrapper[4821]: I0930 17:19:48.999805 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84kp6\" (UniqueName: \"kubernetes.io/projected/abd2779c-c7a7-4d42-8e83-7cbec573d595-kube-api-access-84kp6\") pod \"horizon-5b974b45dd-mbzvm\" (UID: \"abd2779c-c7a7-4d42-8e83-7cbec573d595\") " pod="openstack/horizon-5b974b45dd-mbzvm" Sep 30 17:19:49 crc kubenswrapper[4821]: I0930 17:19:48.999829 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be55b7f-8f57-44f9-899b-d8e6676e5e02-combined-ca-bundle\") pod \"horizon-78b9594fb8-nw9qj\" (UID: \"4be55b7f-8f57-44f9-899b-d8e6676e5e02\") " pod="openstack/horizon-78b9594fb8-nw9qj" Sep 30 17:19:49 crc kubenswrapper[4821]: I0930 17:19:48.999855 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abd2779c-c7a7-4d42-8e83-7cbec573d595-combined-ca-bundle\") pod \"horizon-5b974b45dd-mbzvm\" (UID: \"abd2779c-c7a7-4d42-8e83-7cbec573d595\") " pod="openstack/horizon-5b974b45dd-mbzvm" Sep 30 17:19:49 crc kubenswrapper[4821]: I0930 17:19:48.999882 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chkhd\" (UniqueName: \"kubernetes.io/projected/4be55b7f-8f57-44f9-899b-d8e6676e5e02-kube-api-access-chkhd\") pod \"horizon-78b9594fb8-nw9qj\" (UID: \"4be55b7f-8f57-44f9-899b-d8e6676e5e02\") " pod="openstack/horizon-78b9594fb8-nw9qj" Sep 30 17:19:49 crc kubenswrapper[4821]: I0930 17:19:48.999908 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4be55b7f-8f57-44f9-899b-d8e6676e5e02-config-data\") pod \"horizon-78b9594fb8-nw9qj\" (UID: \"4be55b7f-8f57-44f9-899b-d8e6676e5e02\") " pod="openstack/horizon-78b9594fb8-nw9qj" Sep 30 17:19:49 crc kubenswrapper[4821]: I0930 17:19:48.999926 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/4be55b7f-8f57-44f9-899b-d8e6676e5e02-scripts\") pod \"horizon-78b9594fb8-nw9qj\" (UID: \"4be55b7f-8f57-44f9-899b-d8e6676e5e02\") " pod="openstack/horizon-78b9594fb8-nw9qj" Sep 30 17:19:49 crc kubenswrapper[4821]: I0930 17:19:49.000272 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abd2779c-c7a7-4d42-8e83-7cbec573d595-logs\") pod \"horizon-5b974b45dd-mbzvm\" (UID: \"abd2779c-c7a7-4d42-8e83-7cbec573d595\") " pod="openstack/horizon-5b974b45dd-mbzvm" Sep 30 17:19:49 crc kubenswrapper[4821]: I0930 17:19:49.000349 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abd2779c-c7a7-4d42-8e83-7cbec573d595-scripts\") pod \"horizon-5b974b45dd-mbzvm\" (UID: \"abd2779c-c7a7-4d42-8e83-7cbec573d595\") " pod="openstack/horizon-5b974b45dd-mbzvm" Sep 30 17:19:49 crc kubenswrapper[4821]: I0930 17:19:49.001480 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/abd2779c-c7a7-4d42-8e83-7cbec573d595-config-data\") pod \"horizon-5b974b45dd-mbzvm\" (UID: \"abd2779c-c7a7-4d42-8e83-7cbec573d595\") " pod="openstack/horizon-5b974b45dd-mbzvm" Sep 30 17:19:49 crc kubenswrapper[4821]: I0930 17:19:49.011780 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/abd2779c-c7a7-4d42-8e83-7cbec573d595-horizon-tls-certs\") pod \"horizon-5b974b45dd-mbzvm\" (UID: \"abd2779c-c7a7-4d42-8e83-7cbec573d595\") " pod="openstack/horizon-5b974b45dd-mbzvm" Sep 30 17:19:49 crc kubenswrapper[4821]: I0930 17:19:49.011812 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/abd2779c-c7a7-4d42-8e83-7cbec573d595-horizon-secret-key\") pod \"horizon-5b974b45dd-mbzvm\" (UID: \"abd2779c-c7a7-4d42-8e83-7cbec573d595\") " pod="openstack/horizon-5b974b45dd-mbzvm" Sep 30 17:19:49 crc kubenswrapper[4821]: I0930 17:19:49.018003 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abd2779c-c7a7-4d42-8e83-7cbec573d595-combined-ca-bundle\") pod \"horizon-5b974b45dd-mbzvm\" (UID: \"abd2779c-c7a7-4d42-8e83-7cbec573d595\") " pod="openstack/horizon-5b974b45dd-mbzvm" Sep 30 17:19:49 crc kubenswrapper[4821]: I0930 17:19:49.033515 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84kp6\" (UniqueName: \"kubernetes.io/projected/abd2779c-c7a7-4d42-8e83-7cbec573d595-kube-api-access-84kp6\") pod \"horizon-5b974b45dd-mbzvm\" (UID: \"abd2779c-c7a7-4d42-8e83-7cbec573d595\") " pod="openstack/horizon-5b974b45dd-mbzvm" Sep 30 17:19:49 crc kubenswrapper[4821]: I0930 17:19:49.103783 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4be55b7f-8f57-44f9-899b-d8e6676e5e02-horizon-tls-certs\") pod \"horizon-78b9594fb8-nw9qj\" (UID: \"4be55b7f-8f57-44f9-899b-d8e6676e5e02\") " pod="openstack/horizon-78b9594fb8-nw9qj" Sep 30 17:19:49 crc kubenswrapper[4821]: I0930 17:19:49.103834 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4be55b7f-8f57-44f9-899b-d8e6676e5e02-logs\") pod \"horizon-78b9594fb8-nw9qj\" (UID: \"4be55b7f-8f57-44f9-899b-d8e6676e5e02\") " pod="openstack/horizon-78b9594fb8-nw9qj" Sep 30 
17:19:49 crc kubenswrapper[4821]: I0930 17:19:49.103950 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4be55b7f-8f57-44f9-899b-d8e6676e5e02-horizon-secret-key\") pod \"horizon-78b9594fb8-nw9qj\" (UID: \"4be55b7f-8f57-44f9-899b-d8e6676e5e02\") " pod="openstack/horizon-78b9594fb8-nw9qj" Sep 30 17:19:49 crc kubenswrapper[4821]: I0930 17:19:49.104043 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be55b7f-8f57-44f9-899b-d8e6676e5e02-combined-ca-bundle\") pod \"horizon-78b9594fb8-nw9qj\" (UID: \"4be55b7f-8f57-44f9-899b-d8e6676e5e02\") " pod="openstack/horizon-78b9594fb8-nw9qj" Sep 30 17:19:49 crc kubenswrapper[4821]: I0930 17:19:49.104114 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chkhd\" (UniqueName: \"kubernetes.io/projected/4be55b7f-8f57-44f9-899b-d8e6676e5e02-kube-api-access-chkhd\") pod \"horizon-78b9594fb8-nw9qj\" (UID: \"4be55b7f-8f57-44f9-899b-d8e6676e5e02\") " pod="openstack/horizon-78b9594fb8-nw9qj" Sep 30 17:19:49 crc kubenswrapper[4821]: I0930 17:19:49.104153 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4be55b7f-8f57-44f9-899b-d8e6676e5e02-config-data\") pod \"horizon-78b9594fb8-nw9qj\" (UID: \"4be55b7f-8f57-44f9-899b-d8e6676e5e02\") " pod="openstack/horizon-78b9594fb8-nw9qj" Sep 30 17:19:49 crc kubenswrapper[4821]: I0930 17:19:49.104181 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4be55b7f-8f57-44f9-899b-d8e6676e5e02-scripts\") pod \"horizon-78b9594fb8-nw9qj\" (UID: \"4be55b7f-8f57-44f9-899b-d8e6676e5e02\") " pod="openstack/horizon-78b9594fb8-nw9qj" Sep 30 17:19:49 crc kubenswrapper[4821]: I0930 17:19:49.105003 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4be55b7f-8f57-44f9-899b-d8e6676e5e02-scripts\") pod \"horizon-78b9594fb8-nw9qj\" (UID: \"4be55b7f-8f57-44f9-899b-d8e6676e5e02\") " pod="openstack/horizon-78b9594fb8-nw9qj" Sep 30 17:19:49 crc kubenswrapper[4821]: I0930 17:19:49.105264 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4be55b7f-8f57-44f9-899b-d8e6676e5e02-logs\") pod \"horizon-78b9594fb8-nw9qj\" (UID: \"4be55b7f-8f57-44f9-899b-d8e6676e5e02\") " pod="openstack/horizon-78b9594fb8-nw9qj" Sep 30 17:19:49 crc kubenswrapper[4821]: I0930 17:19:49.106919 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4be55b7f-8f57-44f9-899b-d8e6676e5e02-config-data\") pod \"horizon-78b9594fb8-nw9qj\" (UID: \"4be55b7f-8f57-44f9-899b-d8e6676e5e02\") " pod="openstack/horizon-78b9594fb8-nw9qj" Sep 30 17:19:49 crc kubenswrapper[4821]: I0930 17:19:49.108314 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5b974b45dd-mbzvm" Sep 30 17:19:49 crc kubenswrapper[4821]: I0930 17:19:49.109309 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4be55b7f-8f57-44f9-899b-d8e6676e5e02-horizon-secret-key\") pod \"horizon-78b9594fb8-nw9qj\" (UID: \"4be55b7f-8f57-44f9-899b-d8e6676e5e02\") " pod="openstack/horizon-78b9594fb8-nw9qj" Sep 30 17:19:49 crc kubenswrapper[4821]: I0930 17:19:49.109431 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4be55b7f-8f57-44f9-899b-d8e6676e5e02-horizon-tls-certs\") pod \"horizon-78b9594fb8-nw9qj\" (UID: \"4be55b7f-8f57-44f9-899b-d8e6676e5e02\") " pod="openstack/horizon-78b9594fb8-nw9qj" Sep 30 17:19:49 crc kubenswrapper[4821]: I0930 17:19:49.109592 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be55b7f-8f57-44f9-899b-d8e6676e5e02-combined-ca-bundle\") pod \"horizon-78b9594fb8-nw9qj\" (UID: \"4be55b7f-8f57-44f9-899b-d8e6676e5e02\") " pod="openstack/horizon-78b9594fb8-nw9qj" Sep 30 17:19:49 crc kubenswrapper[4821]: I0930 17:19:49.124389 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chkhd\" (UniqueName: \"kubernetes.io/projected/4be55b7f-8f57-44f9-899b-d8e6676e5e02-kube-api-access-chkhd\") pod \"horizon-78b9594fb8-nw9qj\" (UID: \"4be55b7f-8f57-44f9-899b-d8e6676e5e02\") " pod="openstack/horizon-78b9594fb8-nw9qj" Sep 30 17:19:49 crc kubenswrapper[4821]: I0930 17:19:49.245410 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78b9594fb8-nw9qj" Sep 30 17:19:52 crc kubenswrapper[4821]: I0930 17:19:52.761023 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-hqt9z" podUID="2f2f19e5-bd02-4369-b594-ee71c4c83509" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Sep 30 17:20:01 crc kubenswrapper[4821]: I0930 17:20:01.675398 4821 scope.go:117] "RemoveContainer" containerID="26a780bbb587cad58805be2a1602dbe3be7ce350480c3d0f8e98da2588afb592" Sep 30 17:20:01 crc kubenswrapper[4821]: E0930 17:20:01.676663 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26a780bbb587cad58805be2a1602dbe3be7ce350480c3d0f8e98da2588afb592\": container with ID starting with 26a780bbb587cad58805be2a1602dbe3be7ce350480c3d0f8e98da2588afb592 not found: ID does not exist" containerID="26a780bbb587cad58805be2a1602dbe3be7ce350480c3d0f8e98da2588afb592" Sep 30 17:20:01 crc kubenswrapper[4821]: I0930 17:20:01.676713 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26a780bbb587cad58805be2a1602dbe3be7ce350480c3d0f8e98da2588afb592"} err="failed to get container status \"26a780bbb587cad58805be2a1602dbe3be7ce350480c3d0f8e98da2588afb592\": rpc error: code = NotFound desc = could not find container \"26a780bbb587cad58805be2a1602dbe3be7ce350480c3d0f8e98da2588afb592\": container with ID starting with 26a780bbb587cad58805be2a1602dbe3be7ce350480c3d0f8e98da2588afb592 not found: ID does not exist" Sep 30 17:20:01 crc kubenswrapper[4821]: I0930 17:20:01.676746 4821 scope.go:117] "RemoveContainer" containerID="5086c5be9613bfcf0ccaa9f5791f776b824d9b2b81a75df408a9c9fc44b97196" Sep 30 17:20:01 crc kubenswrapper[4821]: E0930 
17:20:01.677440 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5086c5be9613bfcf0ccaa9f5791f776b824d9b2b81a75df408a9c9fc44b97196\": container with ID starting with 5086c5be9613bfcf0ccaa9f5791f776b824d9b2b81a75df408a9c9fc44b97196 not found: ID does not exist" containerID="5086c5be9613bfcf0ccaa9f5791f776b824d9b2b81a75df408a9c9fc44b97196" Sep 30 17:20:01 crc kubenswrapper[4821]: I0930 17:20:01.677482 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5086c5be9613bfcf0ccaa9f5791f776b824d9b2b81a75df408a9c9fc44b97196"} err="failed to get container status \"5086c5be9613bfcf0ccaa9f5791f776b824d9b2b81a75df408a9c9fc44b97196\": rpc error: code = NotFound desc = could not find container \"5086c5be9613bfcf0ccaa9f5791f776b824d9b2b81a75df408a9c9fc44b97196\": container with ID starting with 5086c5be9613bfcf0ccaa9f5791f776b824d9b2b81a75df408a9c9fc44b97196 not found: ID does not exist" Sep 30 17:20:01 crc kubenswrapper[4821]: I0930 17:20:01.677506 4821 scope.go:117] "RemoveContainer" containerID="26a780bbb587cad58805be2a1602dbe3be7ce350480c3d0f8e98da2588afb592" Sep 30 17:20:01 crc kubenswrapper[4821]: I0930 17:20:01.677780 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26a780bbb587cad58805be2a1602dbe3be7ce350480c3d0f8e98da2588afb592"} err="failed to get container status \"26a780bbb587cad58805be2a1602dbe3be7ce350480c3d0f8e98da2588afb592\": rpc error: code = NotFound desc = could not find container \"26a780bbb587cad58805be2a1602dbe3be7ce350480c3d0f8e98da2588afb592\": container with ID starting with 26a780bbb587cad58805be2a1602dbe3be7ce350480c3d0f8e98da2588afb592 not found: ID does not exist" Sep 30 17:20:01 crc kubenswrapper[4821]: I0930 17:20:01.677812 4821 scope.go:117] "RemoveContainer" containerID="5086c5be9613bfcf0ccaa9f5791f776b824d9b2b81a75df408a9c9fc44b97196" Sep 30 17:20:01 crc kubenswrapper[4821]: I0930 17:20:01.678241 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5086c5be9613bfcf0ccaa9f5791f776b824d9b2b81a75df408a9c9fc44b97196"} err="failed to get container status \"5086c5be9613bfcf0ccaa9f5791f776b824d9b2b81a75df408a9c9fc44b97196\": rpc error: code = NotFound desc = could not find container \"5086c5be9613bfcf0ccaa9f5791f776b824d9b2b81a75df408a9c9fc44b97196\": container with ID starting with 5086c5be9613bfcf0ccaa9f5791f776b824d9b2b81a75df408a9c9fc44b97196 not found: ID does not exist" Sep 30 17:20:01 crc kubenswrapper[4821]: I0930 17:20:01.678281 4821 scope.go:117] "RemoveContainer" containerID="06e924972a97e49b488f8c659adce86eccfe3e9001d891cdd8466b7743c2f7da" Sep 30 17:20:01 crc kubenswrapper[4821]: I0930 17:20:01.799428 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-hqt9z" Sep 30 17:20:01 crc kubenswrapper[4821]: I0930 17:20:01.939417 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f2f19e5-bd02-4369-b594-ee71c4c83509-ovsdbserver-sb\") pod \"2f2f19e5-bd02-4369-b594-ee71c4c83509\" (UID: \"2f2f19e5-bd02-4369-b594-ee71c4c83509\") " Sep 30 17:20:01 crc kubenswrapper[4821]: I0930 17:20:01.939480 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cgv7\" (UniqueName: \"kubernetes.io/projected/2f2f19e5-bd02-4369-b594-ee71c4c83509-kube-api-access-9cgv7\") pod \"2f2f19e5-bd02-4369-b594-ee71c4c83509\" (UID: \"2f2f19e5-bd02-4369-b594-ee71c4c83509\") " Sep 30 17:20:01 crc kubenswrapper[4821]: I0930 17:20:01.939502 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f2f19e5-bd02-4369-b594-ee71c4c83509-config\") pod \"2f2f19e5-bd02-4369-b594-ee71c4c83509\" (UID: \"2f2f19e5-bd02-4369-b594-ee71c4c83509\") " Sep 30 17:20:01 crc kubenswrapper[4821]: I0930 17:20:01.939516 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f2f19e5-bd02-4369-b594-ee71c4c83509-ovsdbserver-nb\") pod \"2f2f19e5-bd02-4369-b594-ee71c4c83509\" (UID: \"2f2f19e5-bd02-4369-b594-ee71c4c83509\") " Sep 30 17:20:01 crc kubenswrapper[4821]: I0930 17:20:01.939618 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f2f19e5-bd02-4369-b594-ee71c4c83509-dns-svc\") pod \"2f2f19e5-bd02-4369-b594-ee71c4c83509\" (UID: \"2f2f19e5-bd02-4369-b594-ee71c4c83509\") " Sep 30 17:20:01 crc kubenswrapper[4821]: I0930 17:20:01.948259 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f2f19e5-bd02-4369-b594-ee71c4c83509-kube-api-access-9cgv7" (OuterVolumeSpecName: "kube-api-access-9cgv7") pod "2f2f19e5-bd02-4369-b594-ee71c4c83509" (UID: "2f2f19e5-bd02-4369-b594-ee71c4c83509"). InnerVolumeSpecName "kube-api-access-9cgv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:20:01 crc kubenswrapper[4821]: I0930 17:20:01.990375 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f2f19e5-bd02-4369-b594-ee71c4c83509-config" (OuterVolumeSpecName: "config") pod "2f2f19e5-bd02-4369-b594-ee71c4c83509" (UID: "2f2f19e5-bd02-4369-b594-ee71c4c83509"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:20:01 crc kubenswrapper[4821]: I0930 17:20:01.999590 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f2f19e5-bd02-4369-b594-ee71c4c83509-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2f2f19e5-bd02-4369-b594-ee71c4c83509" (UID: "2f2f19e5-bd02-4369-b594-ee71c4c83509"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:20:02 crc kubenswrapper[4821]: I0930 17:20:02.010698 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f2f19e5-bd02-4369-b594-ee71c4c83509-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2f2f19e5-bd02-4369-b594-ee71c4c83509" (UID: "2f2f19e5-bd02-4369-b594-ee71c4c83509"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:20:02 crc kubenswrapper[4821]: I0930 17:20:02.010809 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f2f19e5-bd02-4369-b594-ee71c4c83509-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2f2f19e5-bd02-4369-b594-ee71c4c83509" (UID: "2f2f19e5-bd02-4369-b594-ee71c4c83509"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:20:02 crc kubenswrapper[4821]: I0930 17:20:02.041447 4821 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f2f19e5-bd02-4369-b594-ee71c4c83509-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:02 crc kubenswrapper[4821]: I0930 17:20:02.041825 4821 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f2f19e5-bd02-4369-b594-ee71c4c83509-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:02 crc kubenswrapper[4821]: I0930 17:20:02.042606 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cgv7\" (UniqueName: \"kubernetes.io/projected/2f2f19e5-bd02-4369-b594-ee71c4c83509-kube-api-access-9cgv7\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:02 crc kubenswrapper[4821]: I0930 17:20:02.042795 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f2f19e5-bd02-4369-b594-ee71c4c83509-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:02 crc kubenswrapper[4821]: I0930 17:20:02.042868 4821 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f2f19e5-bd02-4369-b594-ee71c4c83509-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:02 crc kubenswrapper[4821]: I0930 17:20:02.101438 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-hqt9z" event={"ID":"2f2f19e5-bd02-4369-b594-ee71c4c83509","Type":"ContainerDied","Data":"fc72537958f366b138ef658a9cdcbc51c2741c18d480f817ed47f983f2330f4e"} Sep 30 17:20:02 crc kubenswrapper[4821]: I0930 17:20:02.101554 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-hqt9z" Sep 30 17:20:02 crc kubenswrapper[4821]: I0930 17:20:02.134571 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-hqt9z"] Sep 30 17:20:02 crc kubenswrapper[4821]: I0930 17:20:02.140711 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-hqt9z"] Sep 30 17:20:02 crc kubenswrapper[4821]: I0930 17:20:02.718199 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f2f19e5-bd02-4369-b594-ee71c4c83509" path="/var/lib/kubelet/pods/2f2f19e5-bd02-4369-b594-ee71c4c83509/volumes" Sep 30 17:20:02 crc kubenswrapper[4821]: I0930 17:20:02.761838 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-hqt9z" podUID="2f2f19e5-bd02-4369-b594-ee71c4c83509" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Sep 30 17:20:02 crc kubenswrapper[4821]: I0930 17:20:02.971253 4821 scope.go:117] "RemoveContainer" containerID="871af5db87d95feed97bfc2a989ae523a35a5da449842e27f099ae1a4c36fa3c" Sep 30 17:20:02 crc kubenswrapper[4821]: E0930 17:20:02.975805 4821 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Sep 30 17:20:02 crc kubenswrapper[4821]: E0930 17:20:02.976015 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qhnz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoo
t:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-kr849_openstack(9aa40c0f-e07d-43de-92d6-60ba8d6b668d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 30 17:20:02 crc kubenswrapper[4821]: E0930 17:20:02.977298 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-kr849" podUID="9aa40c0f-e07d-43de-92d6-60ba8d6b668d" Sep 30 17:20:03 crc kubenswrapper[4821]: I0930 17:20:03.149717 4821 generic.go:334] "Generic (PLEG): container finished" podID="5a2b06c3-690a-469e-bdf6-5033d9be88e8" containerID="34617b1a24cead19921d6337ba964022401b56a3e645b303d906029c2264eb59" exitCode=0 Sep 30 17:20:03 crc kubenswrapper[4821]: I0930 17:20:03.149991 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jjbr2" event={"ID":"5a2b06c3-690a-469e-bdf6-5033d9be88e8","Type":"ContainerDied","Data":"34617b1a24cead19921d6337ba964022401b56a3e645b303d906029c2264eb59"} Sep 30 17:20:03 crc kubenswrapper[4821]: E0930 17:20:03.182356 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-kr849" podUID="9aa40c0f-e07d-43de-92d6-60ba8d6b668d" Sep 30 17:20:03 crc kubenswrapper[4821]: I0930 17:20:03.182435 4821 scope.go:117] "RemoveContainer" containerID="dd8bf1837b14ce49342b508228007f30436900c3b57e32f16d73b17d9b8f629a" Sep 30 17:20:03 crc kubenswrapper[4821]: I0930 17:20:03.291465 4821 scope.go:117] "RemoveContainer" containerID="02d11ea930c4e42ecd9dc405f4a06a52a1a0d9109c326a6f197349fc275fa6ae" Sep 30 17:20:03 crc kubenswrapper[4821]: I0930 17:20:03.648218 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:20:03 crc kubenswrapper[4821]: I0930 17:20:03.673243 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78b9594fb8-nw9qj"] Sep 30 17:20:03 crc kubenswrapper[4821]: W0930 17:20:03.675050 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4be55b7f_8f57_44f9_899b_d8e6676e5e02.slice/crio-10e7ecb265d478fae446e0838ccba981eae97057eb93c40cb00056993b2de569 WatchSource:0}: Error finding container 10e7ecb265d478fae446e0838ccba981eae97057eb93c40cb00056993b2de569: Status 404 returned error can't find the container with id 10e7ecb265d478fae446e0838ccba981eae97057eb93c40cb00056993b2de569 Sep 30 17:20:03 crc kubenswrapper[4821]: I0930 17:20:03.678489 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b974b45dd-mbzvm"] Sep 30 17:20:03 crc kubenswrapper[4821]: I0930 17:20:03.768950 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-j72gp"] Sep 30 17:20:03 crc kubenswrapper[4821]: I0930 17:20:03.952625 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Sep 30 17:20:03 crc kubenswrapper[4821]: W0930 17:20:03.996613 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec1eff5b_04bb_4c4a_bcc6_88025f356922.slice/crio-bfe77e549597f77af7d9d0d555ff63c48903b2170db13272995a48966655fc2b WatchSource:0}: Error finding container bfe77e549597f77af7d9d0d555ff63c48903b2170db13272995a48966655fc2b: Status 404 returned error can't find the container with id bfe77e549597f77af7d9d0d555ff63c48903b2170db13272995a48966655fc2b Sep 30 17:20:04 crc kubenswrapper[4821]: I0930 17:20:04.181833 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-j72gp" event={"ID":"008af46f-5c9c-44f6-beb7-fa105649d52b","Type":"ContainerStarted","Data":"f07577e25c2be6b347b3b297f17e0f700f6be05a4076a0909bceb2f81a8369a3"} Sep 30 17:20:04 crc kubenswrapper[4821]: I0930 17:20:04.187688 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mk2vd" event={"ID":"7b1226c0-ea59-4c57-9837-cafbb926f373","Type":"ContainerStarted","Data":"d38d2bb0cbe872d7cf48cbd1f17e8efbe212914c1257e9ec428fe8aa5fe2bfec"} Sep 30 17:20:04 crc kubenswrapper[4821]: I0930 17:20:04.189238 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9887b9bbf-xcxhr" event={"ID":"f9366bc1-d7bf-412a-bf0d-a122e3a3d10f","Type":"ContainerStarted","Data":"753b32079f01a570cf60474ae0baf796e92256442342c4c67cda2fff5a80e70a"} Sep 30 17:20:04 crc kubenswrapper[4821]: I0930 17:20:04.192624 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78b9594fb8-nw9qj" event={"ID":"4be55b7f-8f57-44f9-899b-d8e6676e5e02","Type":"ContainerStarted","Data":"10e7ecb265d478fae446e0838ccba981eae97057eb93c40cb00056993b2de569"} Sep 30 17:20:04 crc kubenswrapper[4821]: I0930 17:20:04.194514 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7698bcb95c-njjrf" event={"ID":"c1e20153-619c-4c3a-93ef-39c4b87d535e","Type":"ContainerStarted","Data":"2191569581b539409bb99d26050a98cad5524588cb6a92034708d80248ac1a33"} Sep 30 17:20:04 crc kubenswrapper[4821]: I0930 17:20:04.194541 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7698bcb95c-njjrf" event={"ID":"c1e20153-619c-4c3a-93ef-39c4b87d535e","Type":"ContainerStarted","Data":"9ee08e4aa4a5c0e87bd8281c46e23edb718b6ba7a62ac6ae30c11cdcadc4b0ba"} Sep 30 17:20:04 crc kubenswrapper[4821]: I0930 17:20:04.194677 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7698bcb95c-njjrf" podUID="c1e20153-619c-4c3a-93ef-39c4b87d535e" containerName="horizon-log" containerID="cri-o://9ee08e4aa4a5c0e87bd8281c46e23edb718b6ba7a62ac6ae30c11cdcadc4b0ba" gracePeriod=30 Sep 30 17:20:04 crc kubenswrapper[4821]: I0930 17:20:04.194913 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7698bcb95c-njjrf" podUID="c1e20153-619c-4c3a-93ef-39c4b87d535e" containerName="horizon" containerID="cri-o://2191569581b539409bb99d26050a98cad5524588cb6a92034708d80248ac1a33" gracePeriod=30 Sep 30 17:20:04 crc kubenswrapper[4821]: I0930 17:20:04.196705 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0b9b0ef8-b409-45b6-924a-d28b0584cca0","Type":"ContainerStarted","Data":"7c37dc8b27e661049c5d6faf332281cf7026946f4dd4d3939011de12890a3ba9"} Sep 30 17:20:04 crc kubenswrapper[4821]: I0930 17:20:04.199626 4821 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b974b45dd-mbzvm" event={"ID":"abd2779c-c7a7-4d42-8e83-7cbec573d595","Type":"ContainerStarted","Data":"1c6fc6187908386fb2830478c1c239f6c9373f459da346efb7bdb7089970b7b2"} Sep 30 17:20:04 crc kubenswrapper[4821]: I0930 17:20:04.221819 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ec1eff5b-04bb-4c4a-bcc6-88025f356922","Type":"ContainerStarted","Data":"bfe77e549597f77af7d9d0d555ff63c48903b2170db13272995a48966655fc2b"} Sep 30 17:20:04 crc kubenswrapper[4821]: I0930 17:20:04.233558 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-mk2vd" podStartSLOduration=3.210722452 podStartE2EDuration="27.233536372s" podCreationTimestamp="2025-09-30 17:19:37 +0000 UTC" firstStartedPulling="2025-09-30 17:19:38.906669086 +0000 UTC m=+974.811715030" lastFinishedPulling="2025-09-30 17:20:02.929483006 +0000 UTC m=+998.834528950" observedRunningTime="2025-09-30 17:20:04.214446806 +0000 UTC m=+1000.119492750" watchObservedRunningTime="2025-09-30 17:20:04.233536372 +0000 UTC m=+1000.138582326" Sep 30 17:20:04 crc kubenswrapper[4821]: I0930 17:20:04.233906 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7ddb7997dc-cnx5j" event={"ID":"145b7040-eb73-4b29-9e7a-a96d867530c5","Type":"ContainerStarted","Data":"44d96c13026608943cb8f4546f8344620f310d0e3451cbdd705e0b0ca636be17"} Sep 30 17:20:04 crc kubenswrapper[4821]: I0930 17:20:04.233951 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7ddb7997dc-cnx5j" event={"ID":"145b7040-eb73-4b29-9e7a-a96d867530c5","Type":"ContainerStarted","Data":"a0ad46de1519e1676ac4e5d92f313c7d090e740668982e886722e854d3f3afe6"} Sep 30 17:20:04 crc kubenswrapper[4821]: I0930 17:20:04.234144 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7ddb7997dc-cnx5j" podUID="145b7040-eb73-4b29-9e7a-a96d867530c5" containerName="horizon-log" containerID="cri-o://a0ad46de1519e1676ac4e5d92f313c7d090e740668982e886722e854d3f3afe6" gracePeriod=30 Sep 30 17:20:04 crc kubenswrapper[4821]: I0930 17:20:04.234277 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7ddb7997dc-cnx5j" podUID="145b7040-eb73-4b29-9e7a-a96d867530c5" containerName="horizon" containerID="cri-o://44d96c13026608943cb8f4546f8344620f310d0e3451cbdd705e0b0ca636be17" gracePeriod=30 Sep 30 17:20:04 crc kubenswrapper[4821]: I0930 17:20:04.268469 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7698bcb95c-njjrf" podStartSLOduration=4.179085244 podStartE2EDuration="28.26844448s" podCreationTimestamp="2025-09-30 17:19:36 +0000 UTC" firstStartedPulling="2025-09-30 17:19:38.839974256 +0000 UTC m=+974.745020200" lastFinishedPulling="2025-09-30 17:20:02.929333492 +0000 UTC m=+998.834379436" observedRunningTime="2025-09-30 17:20:04.241606602 +0000 UTC m=+1000.146652546" watchObservedRunningTime="2025-09-30 17:20:04.26844448 +0000 UTC m=+1000.173490434" Sep 30 17:20:04 crc kubenswrapper[4821]: I0930 17:20:04.323576 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7ddb7997dc-cnx5j" podStartSLOduration=4.155654261 podStartE2EDuration="28.323559701s" podCreationTimestamp="2025-09-30 17:19:36 +0000 UTC" firstStartedPulling="2025-09-30 17:19:38.848046167 +0000 UTC m=+974.753092111" lastFinishedPulling="2025-09-30 17:20:03.015951597 
+0000 UTC m=+998.920997551" observedRunningTime="2025-09-30 17:20:04.261338414 +0000 UTC m=+1000.166384358" watchObservedRunningTime="2025-09-30 17:20:04.323559701 +0000 UTC m=+1000.228605645" Sep 30 17:20:04 crc kubenswrapper[4821]: I0930 17:20:04.576221 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-jjbr2" Sep 30 17:20:04 crc kubenswrapper[4821]: I0930 17:20:04.740058 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8j4x\" (UniqueName: \"kubernetes.io/projected/5a2b06c3-690a-469e-bdf6-5033d9be88e8-kube-api-access-v8j4x\") pod \"5a2b06c3-690a-469e-bdf6-5033d9be88e8\" (UID: \"5a2b06c3-690a-469e-bdf6-5033d9be88e8\") " Sep 30 17:20:04 crc kubenswrapper[4821]: I0930 17:20:04.740570 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5a2b06c3-690a-469e-bdf6-5033d9be88e8-config\") pod \"5a2b06c3-690a-469e-bdf6-5033d9be88e8\" (UID: \"5a2b06c3-690a-469e-bdf6-5033d9be88e8\") " Sep 30 17:20:04 crc kubenswrapper[4821]: I0930 17:20:04.740658 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a2b06c3-690a-469e-bdf6-5033d9be88e8-combined-ca-bundle\") pod \"5a2b06c3-690a-469e-bdf6-5033d9be88e8\" (UID: \"5a2b06c3-690a-469e-bdf6-5033d9be88e8\") " Sep 30 17:20:04 crc kubenswrapper[4821]: I0930 17:20:04.763285 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a2b06c3-690a-469e-bdf6-5033d9be88e8-kube-api-access-v8j4x" (OuterVolumeSpecName: "kube-api-access-v8j4x") pod "5a2b06c3-690a-469e-bdf6-5033d9be88e8" (UID: "5a2b06c3-690a-469e-bdf6-5033d9be88e8"). InnerVolumeSpecName "kube-api-access-v8j4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:20:04 crc kubenswrapper[4821]: I0930 17:20:04.848015 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8j4x\" (UniqueName: \"kubernetes.io/projected/5a2b06c3-690a-469e-bdf6-5033d9be88e8-kube-api-access-v8j4x\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:04 crc kubenswrapper[4821]: I0930 17:20:04.862458 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a2b06c3-690a-469e-bdf6-5033d9be88e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a2b06c3-690a-469e-bdf6-5033d9be88e8" (UID: "5a2b06c3-690a-469e-bdf6-5033d9be88e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:20:04 crc kubenswrapper[4821]: I0930 17:20:04.879387 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a2b06c3-690a-469e-bdf6-5033d9be88e8-config" (OuterVolumeSpecName: "config") pod "5a2b06c3-690a-469e-bdf6-5033d9be88e8" (UID: "5a2b06c3-690a-469e-bdf6-5033d9be88e8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:20:04 crc kubenswrapper[4821]: I0930 17:20:04.953543 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5a2b06c3-690a-469e-bdf6-5033d9be88e8-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:04 crc kubenswrapper[4821]: I0930 17:20:04.953580 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a2b06c3-690a-469e-bdf6-5033d9be88e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.278109 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b974b45dd-mbzvm" event={"ID":"abd2779c-c7a7-4d42-8e83-7cbec573d595","Type":"ContainerStarted","Data":"3f2e2c5b1ca73cd1df218b04af23793901759c941bb7c049b256c6415feea341"} Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.278323 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b974b45dd-mbzvm" event={"ID":"abd2779c-c7a7-4d42-8e83-7cbec573d595","Type":"ContainerStarted","Data":"8b9d25a4612b66e90a5bf40aa674b5cc3cc6d13a971523e910f6f524e850cfd0"} Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.282359 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-jjbr2" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.282381 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jjbr2" event={"ID":"5a2b06c3-690a-469e-bdf6-5033d9be88e8","Type":"ContainerDied","Data":"4ab08e2f61cc2d5276e13b02edb98f8939c0a7dde106f66dfd7b06da5420ceb1"} Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.282406 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ab08e2f61cc2d5276e13b02edb98f8939c0a7dde106f66dfd7b06da5420ceb1" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.297287 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9887b9bbf-xcxhr" event={"ID":"f9366bc1-d7bf-412a-bf0d-a122e3a3d10f","Type":"ContainerStarted","Data":"454f5a23c528e73d08aae9575738d92de0eeb556525dc20af420bcd7009ccbed"} Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.301174 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-9887b9bbf-xcxhr" podUID="f9366bc1-d7bf-412a-bf0d-a122e3a3d10f" containerName="horizon-log" containerID="cri-o://753b32079f01a570cf60474ae0baf796e92256442342c4c67cda2fff5a80e70a" gracePeriod=30 Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.301621 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-9887b9bbf-xcxhr" podUID="f9366bc1-d7bf-412a-bf0d-a122e3a3d10f" containerName="horizon" containerID="cri-o://454f5a23c528e73d08aae9575738d92de0eeb556525dc20af420bcd7009ccbed" gracePeriod=30 Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.302257 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5b974b45dd-mbzvm" podStartSLOduration=17.302242631 podStartE2EDuration="17.302242631s" podCreationTimestamp="2025-09-30 17:19:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:20:05.297508424 +0000 UTC m=+1001.202554368" watchObservedRunningTime="2025-09-30 17:20:05.302242631 +0000 UTC m=+1001.207288575" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.328662 4821 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ec1eff5b-04bb-4c4a-bcc6-88025f356922","Type":"ContainerStarted","Data":"6cabf17e74f21133ff20d5b0418d39ed59c7027ac33851a71372bd912fcabd62"} Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.356128 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-9887b9bbf-xcxhr" podStartSLOduration=3.863166035 podStartE2EDuration="26.356111861s" podCreationTimestamp="2025-09-30 17:19:39 +0000 UTC" firstStartedPulling="2025-09-30 17:19:40.527875093 +0000 UTC m=+976.432921037" lastFinishedPulling="2025-09-30 17:20:03.020820919 +0000 UTC m=+998.925866863" observedRunningTime="2025-09-30 17:20:05.349856726 +0000 UTC m=+1001.254902670" watchObservedRunningTime="2025-09-30 17:20:05.356111861 +0000 UTC m=+1001.261157805" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.359374 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78b9594fb8-nw9qj" event={"ID":"4be55b7f-8f57-44f9-899b-d8e6676e5e02","Type":"ContainerStarted","Data":"baa08f14752932307f976f354e58ca9497eb909b8cfff897ded653c131c1cde3"} Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.359411 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78b9594fb8-nw9qj" event={"ID":"4be55b7f-8f57-44f9-899b-d8e6676e5e02","Type":"ContainerStarted","Data":"560528fc16581d7b37fd02e67d8f69c4c2b1c6745f598f7f3b2f6c8a83d6f87e"} Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.398971 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-j72gp" event={"ID":"008af46f-5c9c-44f6-beb7-fa105649d52b","Type":"ContainerStarted","Data":"00ff686225dc4b40c1ffbd91cceca8a73995eee175c3caeb4e2fada4e36e43e3"} Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.421929 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-78b9594fb8-nw9qj" podStartSLOduration=17.421910358 podStartE2EDuration="17.421910358s" podCreationTimestamp="2025-09-30 17:19:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:20:05.410278159 +0000 UTC m=+1001.315324103" watchObservedRunningTime="2025-09-30 17:20:05.421910358 +0000 UTC m=+1001.326956312" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.426986 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0b9b0ef8-b409-45b6-924a-d28b0584cca0","Type":"ContainerStarted","Data":"3f4bc4ac80f863eb8de9f191bd6089e6dd560007c269a24bf3032547b5f49fe7"} Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.453825 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-j72gp" podStartSLOduration=17.453807002 podStartE2EDuration="17.453807002s" podCreationTimestamp="2025-09-30 17:19:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:20:05.436871501 +0000 UTC m=+1001.341917445" watchObservedRunningTime="2025-09-30 17:20:05.453807002 +0000 UTC m=+1001.358852946" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.491455 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-vnhhm"] Sep 30 17:20:05 crc kubenswrapper[4821]: E0930 17:20:05.491831 4821 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2f2f19e5-bd02-4369-b594-ee71c4c83509" containerName="dnsmasq-dns" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.491849 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f2f19e5-bd02-4369-b594-ee71c4c83509" containerName="dnsmasq-dns" Sep 30 17:20:05 crc kubenswrapper[4821]: E0930 17:20:05.491866 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f2f19e5-bd02-4369-b594-ee71c4c83509" containerName="init" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.491873 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f2f19e5-bd02-4369-b594-ee71c4c83509" containerName="init" Sep 30 17:20:05 crc kubenswrapper[4821]: E0930 17:20:05.491887 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a2b06c3-690a-469e-bdf6-5033d9be88e8" containerName="neutron-db-sync" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.491893 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a2b06c3-690a-469e-bdf6-5033d9be88e8" containerName="neutron-db-sync" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.492054 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f2f19e5-bd02-4369-b594-ee71c4c83509" containerName="dnsmasq-dns" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.492109 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a2b06c3-690a-469e-bdf6-5033d9be88e8" containerName="neutron-db-sync" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.492889 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-vnhhm" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.506222 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-vnhhm"] Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.584605 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/882bc52d-7f8b-461d-b7ae-b1e8660897ef-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-vnhhm\" (UID: \"882bc52d-7f8b-461d-b7ae-b1e8660897ef\") " pod="openstack/dnsmasq-dns-7b946d459c-vnhhm" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.584654 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/882bc52d-7f8b-461d-b7ae-b1e8660897ef-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-vnhhm\" (UID: \"882bc52d-7f8b-461d-b7ae-b1e8660897ef\") " pod="openstack/dnsmasq-dns-7b946d459c-vnhhm" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.587170 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/882bc52d-7f8b-461d-b7ae-b1e8660897ef-config\") pod \"dnsmasq-dns-7b946d459c-vnhhm\" (UID: \"882bc52d-7f8b-461d-b7ae-b1e8660897ef\") " pod="openstack/dnsmasq-dns-7b946d459c-vnhhm" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.587409 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hrvl\" (UniqueName: \"kubernetes.io/projected/882bc52d-7f8b-461d-b7ae-b1e8660897ef-kube-api-access-9hrvl\") pod \"dnsmasq-dns-7b946d459c-vnhhm\" (UID: \"882bc52d-7f8b-461d-b7ae-b1e8660897ef\") " pod="openstack/dnsmasq-dns-7b946d459c-vnhhm" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.587551 4821 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/882bc52d-7f8b-461d-b7ae-b1e8660897ef-dns-svc\") pod \"dnsmasq-dns-7b946d459c-vnhhm\" (UID: \"882bc52d-7f8b-461d-b7ae-b1e8660897ef\") " pod="openstack/dnsmasq-dns-7b946d459c-vnhhm" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.646350 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-687c4d45cb-97qzc"] Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.647902 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-687c4d45cb-97qzc" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.653701 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-687c4d45cb-97qzc"] Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.661717 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-n62sk" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.662001 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.668810 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.677584 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.689915 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/882bc52d-7f8b-461d-b7ae-b1e8660897ef-dns-svc\") pod \"dnsmasq-dns-7b946d459c-vnhhm\" (UID: \"882bc52d-7f8b-461d-b7ae-b1e8660897ef\") " pod="openstack/dnsmasq-dns-7b946d459c-vnhhm" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.690160 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/882bc52d-7f8b-461d-b7ae-b1e8660897ef-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-vnhhm\" (UID: \"882bc52d-7f8b-461d-b7ae-b1e8660897ef\") " pod="openstack/dnsmasq-dns-7b946d459c-vnhhm" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.690176 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/882bc52d-7f8b-461d-b7ae-b1e8660897ef-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-vnhhm\" (UID: \"882bc52d-7f8b-461d-b7ae-b1e8660897ef\") " pod="openstack/dnsmasq-dns-7b946d459c-vnhhm" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.690213 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/882bc52d-7f8b-461d-b7ae-b1e8660897ef-config\") pod \"dnsmasq-dns-7b946d459c-vnhhm\" (UID: \"882bc52d-7f8b-461d-b7ae-b1e8660897ef\") " pod="openstack/dnsmasq-dns-7b946d459c-vnhhm" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.690273 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hrvl\" (UniqueName: \"kubernetes.io/projected/882bc52d-7f8b-461d-b7ae-b1e8660897ef-kube-api-access-9hrvl\") pod \"dnsmasq-dns-7b946d459c-vnhhm\" (UID: \"882bc52d-7f8b-461d-b7ae-b1e8660897ef\") " pod="openstack/dnsmasq-dns-7b946d459c-vnhhm" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.692895 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/882bc52d-7f8b-461d-b7ae-b1e8660897ef-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-vnhhm\" (UID: \"882bc52d-7f8b-461d-b7ae-b1e8660897ef\") " pod="openstack/dnsmasq-dns-7b946d459c-vnhhm" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.693692 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/882bc52d-7f8b-461d-b7ae-b1e8660897ef-config\") pod \"dnsmasq-dns-7b946d459c-vnhhm\" (UID: \"882bc52d-7f8b-461d-b7ae-b1e8660897ef\") " pod="openstack/dnsmasq-dns-7b946d459c-vnhhm" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.693809 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/882bc52d-7f8b-461d-b7ae-b1e8660897ef-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-vnhhm\" (UID: \"882bc52d-7f8b-461d-b7ae-b1e8660897ef\") " pod="openstack/dnsmasq-dns-7b946d459c-vnhhm" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.696195 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/882bc52d-7f8b-461d-b7ae-b1e8660897ef-dns-svc\") pod \"dnsmasq-dns-7b946d459c-vnhhm\" (UID: \"882bc52d-7f8b-461d-b7ae-b1e8660897ef\") " pod="openstack/dnsmasq-dns-7b946d459c-vnhhm" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.730676 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hrvl\" (UniqueName: \"kubernetes.io/projected/882bc52d-7f8b-461d-b7ae-b1e8660897ef-kube-api-access-9hrvl\") pod \"dnsmasq-dns-7b946d459c-vnhhm\" (UID: \"882bc52d-7f8b-461d-b7ae-b1e8660897ef\") " pod="openstack/dnsmasq-dns-7b946d459c-vnhhm" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.794099 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bf0686e3-68e9-45aa-a625-ba24fc284342-httpd-config\") pod \"neutron-687c4d45cb-97qzc\" (UID: \"bf0686e3-68e9-45aa-a625-ba24fc284342\") " pod="openstack/neutron-687c4d45cb-97qzc" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.794337 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bf0686e3-68e9-45aa-a625-ba24fc284342-config\") pod \"neutron-687c4d45cb-97qzc\" (UID: \"bf0686e3-68e9-45aa-a625-ba24fc284342\") " pod="openstack/neutron-687c4d45cb-97qzc" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.794438 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpfhm\" (UniqueName: \"kubernetes.io/projected/bf0686e3-68e9-45aa-a625-ba24fc284342-kube-api-access-dpfhm\") pod \"neutron-687c4d45cb-97qzc\" (UID: \"bf0686e3-68e9-45aa-a625-ba24fc284342\") " pod="openstack/neutron-687c4d45cb-97qzc" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.794534 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf0686e3-68e9-45aa-a625-ba24fc284342-ovndb-tls-certs\") pod \"neutron-687c4d45cb-97qzc\" (UID: \"bf0686e3-68e9-45aa-a625-ba24fc284342\") " pod="openstack/neutron-687c4d45cb-97qzc" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.794608 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bf0686e3-68e9-45aa-a625-ba24fc284342-combined-ca-bundle\") pod \"neutron-687c4d45cb-97qzc\" (UID: \"bf0686e3-68e9-45aa-a625-ba24fc284342\") " pod="openstack/neutron-687c4d45cb-97qzc" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.860411 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-vnhhm" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.895619 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bf0686e3-68e9-45aa-a625-ba24fc284342-config\") pod \"neutron-687c4d45cb-97qzc\" (UID: \"bf0686e3-68e9-45aa-a625-ba24fc284342\") " pod="openstack/neutron-687c4d45cb-97qzc" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.896105 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpfhm\" (UniqueName: \"kubernetes.io/projected/bf0686e3-68e9-45aa-a625-ba24fc284342-kube-api-access-dpfhm\") pod \"neutron-687c4d45cb-97qzc\" (UID: \"bf0686e3-68e9-45aa-a625-ba24fc284342\") " pod="openstack/neutron-687c4d45cb-97qzc" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.896169 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf0686e3-68e9-45aa-a625-ba24fc284342-ovndb-tls-certs\") pod \"neutron-687c4d45cb-97qzc\" (UID: \"bf0686e3-68e9-45aa-a625-ba24fc284342\") " pod="openstack/neutron-687c4d45cb-97qzc" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.896193 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf0686e3-68e9-45aa-a625-ba24fc284342-combined-ca-bundle\") pod \"neutron-687c4d45cb-97qzc\" (UID: \"bf0686e3-68e9-45aa-a625-ba24fc284342\") " pod="openstack/neutron-687c4d45cb-97qzc" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.896278 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bf0686e3-68e9-45aa-a625-ba24fc284342-httpd-config\") pod \"neutron-687c4d45cb-97qzc\" (UID: \"bf0686e3-68e9-45aa-a625-ba24fc284342\") " pod="openstack/neutron-687c4d45cb-97qzc" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.914010 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/bf0686e3-68e9-45aa-a625-ba24fc284342-config\") pod \"neutron-687c4d45cb-97qzc\" (UID: \"bf0686e3-68e9-45aa-a625-ba24fc284342\") " pod="openstack/neutron-687c4d45cb-97qzc" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.914523 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bf0686e3-68e9-45aa-a625-ba24fc284342-httpd-config\") pod \"neutron-687c4d45cb-97qzc\" (UID: \"bf0686e3-68e9-45aa-a625-ba24fc284342\") " pod="openstack/neutron-687c4d45cb-97qzc" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.917806 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf0686e3-68e9-45aa-a625-ba24fc284342-combined-ca-bundle\") pod \"neutron-687c4d45cb-97qzc\" (UID: \"bf0686e3-68e9-45aa-a625-ba24fc284342\") " pod="openstack/neutron-687c4d45cb-97qzc" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.923335 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bf0686e3-68e9-45aa-a625-ba24fc284342-ovndb-tls-certs\") pod \"neutron-687c4d45cb-97qzc\" (UID: \"bf0686e3-68e9-45aa-a625-ba24fc284342\") " pod="openstack/neutron-687c4d45cb-97qzc" Sep 30 17:20:05 crc kubenswrapper[4821]: I0930 17:20:05.979111 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpfhm\" (UniqueName: \"kubernetes.io/projected/bf0686e3-68e9-45aa-a625-ba24fc284342-kube-api-access-dpfhm\") pod \"neutron-687c4d45cb-97qzc\" (UID: \"bf0686e3-68e9-45aa-a625-ba24fc284342\") " pod="openstack/neutron-687c4d45cb-97qzc" Sep 30 17:20:06 crc kubenswrapper[4821]: I0930 17:20:06.078099 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-687c4d45cb-97qzc" Sep 30 17:20:06 crc kubenswrapper[4821]: I0930 17:20:06.480906 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ec1eff5b-04bb-4c4a-bcc6-88025f356922","Type":"ContainerStarted","Data":"a251be3e4c5da8d1e32ab722689cdbc2ef27d039649a568828c6ec5d9fe1e31a"} Sep 30 17:20:06 crc kubenswrapper[4821]: I0930 17:20:06.481705 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ec1eff5b-04bb-4c4a-bcc6-88025f356922" containerName="glance-log" containerID="cri-o://6cabf17e74f21133ff20d5b0418d39ed59c7027ac33851a71372bd912fcabd62" gracePeriod=30 Sep 30 17:20:06 crc kubenswrapper[4821]: I0930 17:20:06.482202 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ec1eff5b-04bb-4c4a-bcc6-88025f356922" containerName="glance-httpd" containerID="cri-o://a251be3e4c5da8d1e32ab722689cdbc2ef27d039649a568828c6ec5d9fe1e31a" gracePeriod=30 Sep 30 17:20:06 crc kubenswrapper[4821]: I0930 17:20:06.517771 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0b9b0ef8-b409-45b6-924a-d28b0584cca0" containerName="glance-log" containerID="cri-o://3f4bc4ac80f863eb8de9f191bd6089e6dd560007c269a24bf3032547b5f49fe7" gracePeriod=30 Sep 30 17:20:06 crc kubenswrapper[4821]: I0930 17:20:06.518261 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0b9b0ef8-b409-45b6-924a-d28b0584cca0","Type":"ContainerStarted","Data":"e14da7b3f4ed50d41530ff4bd6262773eeb827442e4f050beaae3f9a7c70102a"} Sep 30 17:20:06 crc kubenswrapper[4821]: I0930 17:20:06.522431 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0b9b0ef8-b409-45b6-924a-d28b0584cca0" containerName="glance-httpd" containerID="cri-o://e14da7b3f4ed50d41530ff4bd6262773eeb827442e4f050beaae3f9a7c70102a" gracePeriod=30 Sep 30 17:20:06 crc kubenswrapper[4821]: I0930 17:20:06.536698 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=23.536675285 podStartE2EDuration="23.536675285s" podCreationTimestamp="2025-09-30 17:19:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:20:06.516482182 +0000 UTC m=+1002.421528126" watchObservedRunningTime="2025-09-30 17:20:06.536675285 +0000 UTC m=+1002.441721229" Sep 30 17:20:06 crc kubenswrapper[4821]: I0930 17:20:06.568991 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/glance-default-external-api-0" podStartSLOduration=23.568969589 podStartE2EDuration="23.568969589s" podCreationTimestamp="2025-09-30 17:19:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:20:06.557422921 +0000 UTC m=+1002.462468865" watchObservedRunningTime="2025-09-30 17:20:06.568969589 +0000 UTC m=+1002.474015533" Sep 30 17:20:06 crc kubenswrapper[4821]: I0930 17:20:06.636758 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-vnhhm"] Sep 30 17:20:06 crc kubenswrapper[4821]: I0930 17:20:06.968389 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-687c4d45cb-97qzc"] Sep 30 17:20:07 crc kubenswrapper[4821]: W0930 17:20:07.031382 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf0686e3_68e9_45aa_a625_ba24fc284342.slice/crio-c891227b397ff7d8612625ae73d9a3854cf0489435060594c3a2f58536094d7a WatchSource:0}: Error finding container c891227b397ff7d8612625ae73d9a3854cf0489435060594c3a2f58536094d7a: Status 404 returned error can't find the container with id c891227b397ff7d8612625ae73d9a3854cf0489435060594c3a2f58536094d7a Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.079204 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7698bcb95c-njjrf" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.367629 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.455357 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7ddb7997dc-cnx5j" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.555935 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec1eff5b-04bb-4c4a-bcc6-88025f356922-scripts\") pod \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\" (UID: \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\") " Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.557826 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec1eff5b-04bb-4c4a-bcc6-88025f356922-combined-ca-bundle\") pod \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\" (UID: \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\") " Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.558049 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec1eff5b-04bb-4c4a-bcc6-88025f356922-config-data\") pod \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\" (UID: \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\") " Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.561374 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec1eff5b-04bb-4c4a-bcc6-88025f356922-scripts" (OuterVolumeSpecName: "scripts") pod "ec1eff5b-04bb-4c4a-bcc6-88025f356922" (UID: "ec1eff5b-04bb-4c4a-bcc6-88025f356922"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.561961 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqpfr\" (UniqueName: \"kubernetes.io/projected/ec1eff5b-04bb-4c4a-bcc6-88025f356922-kube-api-access-dqpfr\") pod \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\" (UID: \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\") " Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.562566 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec1eff5b-04bb-4c4a-bcc6-88025f356922-logs\") pod \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\" (UID: \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\") " Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.565394 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ec1eff5b-04bb-4c4a-bcc6-88025f356922-httpd-run\") pod \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\" (UID: \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\") " Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.565573 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\" (UID: \"ec1eff5b-04bb-4c4a-bcc6-88025f356922\") " Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.565346 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec1eff5b-04bb-4c4a-bcc6-88025f356922-logs" (OuterVolumeSpecName: "logs") pod "ec1eff5b-04bb-4c4a-bcc6-88025f356922" (UID: "ec1eff5b-04bb-4c4a-bcc6-88025f356922"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.566480 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec1eff5b-04bb-4c4a-bcc6-88025f356922-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.566587 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec1eff5b-04bb-4c4a-bcc6-88025f356922-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ec1eff5b-04bb-4c4a-bcc6-88025f356922" (UID: "ec1eff5b-04bb-4c4a-bcc6-88025f356922"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.569351 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "ec1eff5b-04bb-4c4a-bcc6-88025f356922" (UID: "ec1eff5b-04bb-4c4a-bcc6-88025f356922"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.573271 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-687c4d45cb-97qzc" event={"ID":"bf0686e3-68e9-45aa-a625-ba24fc284342","Type":"ContainerStarted","Data":"22ee3c074e073d03da770651fc0efe31c8dacaa5e41e82996264c698d02e5252"} Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.573310 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-687c4d45cb-97qzc" event={"ID":"bf0686e3-68e9-45aa-a625-ba24fc284342","Type":"ContainerStarted","Data":"c891227b397ff7d8612625ae73d9a3854cf0489435060594c3a2f58536094d7a"} Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.574218 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec1eff5b-04bb-4c4a-bcc6-88025f356922-kube-api-access-dqpfr" (OuterVolumeSpecName: "kube-api-access-dqpfr") pod "ec1eff5b-04bb-4c4a-bcc6-88025f356922" (UID: "ec1eff5b-04bb-4c4a-bcc6-88025f356922"). InnerVolumeSpecName "kube-api-access-dqpfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.587073 4821 generic.go:334] "Generic (PLEG): container finished" podID="0b9b0ef8-b409-45b6-924a-d28b0584cca0" containerID="e14da7b3f4ed50d41530ff4bd6262773eeb827442e4f050beaae3f9a7c70102a" exitCode=143 Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.587428 4821 generic.go:334] "Generic (PLEG): container finished" podID="0b9b0ef8-b409-45b6-924a-d28b0584cca0" containerID="3f4bc4ac80f863eb8de9f191bd6089e6dd560007c269a24bf3032547b5f49fe7" exitCode=143 Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.587358 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0b9b0ef8-b409-45b6-924a-d28b0584cca0","Type":"ContainerDied","Data":"e14da7b3f4ed50d41530ff4bd6262773eeb827442e4f050beaae3f9a7c70102a"} Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.587891 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0b9b0ef8-b409-45b6-924a-d28b0584cca0","Type":"ContainerDied","Data":"3f4bc4ac80f863eb8de9f191bd6089e6dd560007c269a24bf3032547b5f49fe7"} Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.604761 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-vnhhm" event={"ID":"882bc52d-7f8b-461d-b7ae-b1e8660897ef","Type":"ContainerStarted","Data":"22e70be9abf5816120b58f092584fd10bc6ac6322e413fe738018bb231000aed"} Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.604804 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-vnhhm" event={"ID":"882bc52d-7f8b-461d-b7ae-b1e8660897ef","Type":"ContainerStarted","Data":"0887568f576da04fc4ccca20e9ceb88c0879f51fbda4513aabf11bcde07bfd77"} Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.618386 4821 generic.go:334] "Generic (PLEG): container finished" podID="ec1eff5b-04bb-4c4a-bcc6-88025f356922" containerID="a251be3e4c5da8d1e32ab722689cdbc2ef27d039649a568828c6ec5d9fe1e31a" exitCode=143 Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.618417 4821 generic.go:334] "Generic (PLEG): container finished" podID="ec1eff5b-04bb-4c4a-bcc6-88025f356922" containerID="6cabf17e74f21133ff20d5b0418d39ed59c7027ac33851a71372bd912fcabd62" exitCode=143 Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.618437 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"ec1eff5b-04bb-4c4a-bcc6-88025f356922","Type":"ContainerDied","Data":"a251be3e4c5da8d1e32ab722689cdbc2ef27d039649a568828c6ec5d9fe1e31a"} Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.618463 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ec1eff5b-04bb-4c4a-bcc6-88025f356922","Type":"ContainerDied","Data":"6cabf17e74f21133ff20d5b0418d39ed59c7027ac33851a71372bd912fcabd62"} Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.618480 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ec1eff5b-04bb-4c4a-bcc6-88025f356922","Type":"ContainerDied","Data":"bfe77e549597f77af7d9d0d555ff63c48903b2170db13272995a48966655fc2b"} Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.618498 4821 scope.go:117] "RemoveContainer" containerID="a251be3e4c5da8d1e32ab722689cdbc2ef27d039649a568828c6ec5d9fe1e31a" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.618614 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.623473 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec1eff5b-04bb-4c4a-bcc6-88025f356922-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec1eff5b-04bb-4c4a-bcc6-88025f356922" (UID: "ec1eff5b-04bb-4c4a-bcc6-88025f356922"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.665500 4821 scope.go:117] "RemoveContainer" containerID="6cabf17e74f21133ff20d5b0418d39ed59c7027ac33851a71372bd912fcabd62" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.667856 4821 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.667883 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec1eff5b-04bb-4c4a-bcc6-88025f356922-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.667893 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqpfr\" (UniqueName: \"kubernetes.io/projected/ec1eff5b-04bb-4c4a-bcc6-88025f356922-kube-api-access-dqpfr\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.667902 4821 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ec1eff5b-04bb-4c4a-bcc6-88025f356922-httpd-run\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.667911 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec1eff5b-04bb-4c4a-bcc6-88025f356922-logs\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.668977 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec1eff5b-04bb-4c4a-bcc6-88025f356922-config-data" (OuterVolumeSpecName: "config-data") pod "ec1eff5b-04bb-4c4a-bcc6-88025f356922" (UID: "ec1eff5b-04bb-4c4a-bcc6-88025f356922"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.714895 4821 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.770688 4821 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.770714 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec1eff5b-04bb-4c4a-bcc6-88025f356922-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.804229 4821 scope.go:117] "RemoveContainer" containerID="a251be3e4c5da8d1e32ab722689cdbc2ef27d039649a568828c6ec5d9fe1e31a" Sep 30 17:20:07 crc kubenswrapper[4821]: E0930 17:20:07.805680 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a251be3e4c5da8d1e32ab722689cdbc2ef27d039649a568828c6ec5d9fe1e31a\": container with ID starting with a251be3e4c5da8d1e32ab722689cdbc2ef27d039649a568828c6ec5d9fe1e31a not found: ID does not exist" containerID="a251be3e4c5da8d1e32ab722689cdbc2ef27d039649a568828c6ec5d9fe1e31a" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.805710 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a251be3e4c5da8d1e32ab722689cdbc2ef27d039649a568828c6ec5d9fe1e31a"} err="failed to get container status \"a251be3e4c5da8d1e32ab722689cdbc2ef27d039649a568828c6ec5d9fe1e31a\": rpc error: code = NotFound desc = could not find container \"a251be3e4c5da8d1e32ab722689cdbc2ef27d039649a568828c6ec5d9fe1e31a\": container with ID starting with a251be3e4c5da8d1e32ab722689cdbc2ef27d039649a568828c6ec5d9fe1e31a not found: ID does not exist" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.805729 4821 scope.go:117] "RemoveContainer" containerID="6cabf17e74f21133ff20d5b0418d39ed59c7027ac33851a71372bd912fcabd62" Sep 30 17:20:07 crc kubenswrapper[4821]: E0930 17:20:07.811248 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cabf17e74f21133ff20d5b0418d39ed59c7027ac33851a71372bd912fcabd62\": container with ID starting with 6cabf17e74f21133ff20d5b0418d39ed59c7027ac33851a71372bd912fcabd62 not found: ID does not exist" containerID="6cabf17e74f21133ff20d5b0418d39ed59c7027ac33851a71372bd912fcabd62" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.811294 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cabf17e74f21133ff20d5b0418d39ed59c7027ac33851a71372bd912fcabd62"} err="failed to get container status \"6cabf17e74f21133ff20d5b0418d39ed59c7027ac33851a71372bd912fcabd62\": rpc error: code = NotFound desc = could not find container \"6cabf17e74f21133ff20d5b0418d39ed59c7027ac33851a71372bd912fcabd62\": container with ID starting with 6cabf17e74f21133ff20d5b0418d39ed59c7027ac33851a71372bd912fcabd62 not found: ID does not exist" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.811322 4821 scope.go:117] "RemoveContainer" containerID="a251be3e4c5da8d1e32ab722689cdbc2ef27d039649a568828c6ec5d9fe1e31a" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.813252 4821 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"a251be3e4c5da8d1e32ab722689cdbc2ef27d039649a568828c6ec5d9fe1e31a"} err="failed to get container status \"a251be3e4c5da8d1e32ab722689cdbc2ef27d039649a568828c6ec5d9fe1e31a\": rpc error: code = NotFound desc = could not find container \"a251be3e4c5da8d1e32ab722689cdbc2ef27d039649a568828c6ec5d9fe1e31a\": container with ID starting with a251be3e4c5da8d1e32ab722689cdbc2ef27d039649a568828c6ec5d9fe1e31a not found: ID does not exist" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.813290 4821 scope.go:117] "RemoveContainer" containerID="6cabf17e74f21133ff20d5b0418d39ed59c7027ac33851a71372bd912fcabd62" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.813617 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cabf17e74f21133ff20d5b0418d39ed59c7027ac33851a71372bd912fcabd62"} err="failed to get container status \"6cabf17e74f21133ff20d5b0418d39ed59c7027ac33851a71372bd912fcabd62\": rpc error: code = NotFound desc = could not find container \"6cabf17e74f21133ff20d5b0418d39ed59c7027ac33851a71372bd912fcabd62\": container with ID starting with 6cabf17e74f21133ff20d5b0418d39ed59c7027ac33851a71372bd912fcabd62 not found: ID does not exist" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.958807 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.961523 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.989927 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:20:07 crc kubenswrapper[4821]: E0930 17:20:07.990282 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec1eff5b-04bb-4c4a-bcc6-88025f356922" containerName="glance-log" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.990293 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec1eff5b-04bb-4c4a-bcc6-88025f356922" containerName="glance-log" Sep 30 17:20:07 crc kubenswrapper[4821]: E0930 17:20:07.990325 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec1eff5b-04bb-4c4a-bcc6-88025f356922" containerName="glance-httpd" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.990330 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec1eff5b-04bb-4c4a-bcc6-88025f356922" containerName="glance-httpd" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.990491 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec1eff5b-04bb-4c4a-bcc6-88025f356922" containerName="glance-httpd" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.990506 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec1eff5b-04bb-4c4a-bcc6-88025f356922" containerName="glance-log" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.991337 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.994532 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Sep 30 17:20:07 crc kubenswrapper[4821]: I0930 17:20:07.994653 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.029027 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.202510 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0767a72c-1e7c-44a0-901c-21807a447600-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0767a72c-1e7c-44a0-901c-21807a447600\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.202563 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0767a72c-1e7c-44a0-901c-21807a447600-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0767a72c-1e7c-44a0-901c-21807a447600\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.202589 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0767a72c-1e7c-44a0-901c-21807a447600-logs\") pod \"glance-default-internal-api-0\" (UID: \"0767a72c-1e7c-44a0-901c-21807a447600\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.202606 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0767a72c-1e7c-44a0-901c-21807a447600-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0767a72c-1e7c-44a0-901c-21807a447600\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.202633 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0767a72c-1e7c-44a0-901c-21807a447600-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0767a72c-1e7c-44a0-901c-21807a447600\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.202668 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"0767a72c-1e7c-44a0-901c-21807a447600\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.202711 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0767a72c-1e7c-44a0-901c-21807a447600-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0767a72c-1e7c-44a0-901c-21807a447600\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.202750 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-x2hr2\" (UniqueName: \"kubernetes.io/projected/0767a72c-1e7c-44a0-901c-21807a447600-kube-api-access-x2hr2\") pod \"glance-default-internal-api-0\" (UID: \"0767a72c-1e7c-44a0-901c-21807a447600\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.304525 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0767a72c-1e7c-44a0-901c-21807a447600-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0767a72c-1e7c-44a0-901c-21807a447600\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.304567 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0767a72c-1e7c-44a0-901c-21807a447600-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0767a72c-1e7c-44a0-901c-21807a447600\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.304608 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0767a72c-1e7c-44a0-901c-21807a447600-logs\") pod \"glance-default-internal-api-0\" (UID: \"0767a72c-1e7c-44a0-901c-21807a447600\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.304627 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0767a72c-1e7c-44a0-901c-21807a447600-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0767a72c-1e7c-44a0-901c-21807a447600\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.304646 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0767a72c-1e7c-44a0-901c-21807a447600-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0767a72c-1e7c-44a0-901c-21807a447600\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.304686 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"0767a72c-1e7c-44a0-901c-21807a447600\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.304728 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0767a72c-1e7c-44a0-901c-21807a447600-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0767a72c-1e7c-44a0-901c-21807a447600\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.304790 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2hr2\" (UniqueName: \"kubernetes.io/projected/0767a72c-1e7c-44a0-901c-21807a447600-kube-api-access-x2hr2\") pod \"glance-default-internal-api-0\" (UID: \"0767a72c-1e7c-44a0-901c-21807a447600\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.313378 4821 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"0767a72c-1e7c-44a0-901c-21807a447600\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.318383 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0767a72c-1e7c-44a0-901c-21807a447600-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0767a72c-1e7c-44a0-901c-21807a447600\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.318637 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0767a72c-1e7c-44a0-901c-21807a447600-logs\") pod \"glance-default-internal-api-0\" (UID: \"0767a72c-1e7c-44a0-901c-21807a447600\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.347731 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0767a72c-1e7c-44a0-901c-21807a447600-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0767a72c-1e7c-44a0-901c-21807a447600\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.348271 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0767a72c-1e7c-44a0-901c-21807a447600-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0767a72c-1e7c-44a0-901c-21807a447600\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.356236 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2hr2\" (UniqueName: \"kubernetes.io/projected/0767a72c-1e7c-44a0-901c-21807a447600-kube-api-access-x2hr2\") pod \"glance-default-internal-api-0\" (UID: \"0767a72c-1e7c-44a0-901c-21807a447600\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.357327 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0767a72c-1e7c-44a0-901c-21807a447600-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0767a72c-1e7c-44a0-901c-21807a447600\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.359711 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0767a72c-1e7c-44a0-901c-21807a447600-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0767a72c-1e7c-44a0-901c-21807a447600\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.361377 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"0767a72c-1e7c-44a0-901c-21807a447600\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.448349 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.605774 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.615652 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b9b0ef8-b409-45b6-924a-d28b0584cca0-combined-ca-bundle\") pod \"0b9b0ef8-b409-45b6-924a-d28b0584cca0\" (UID: \"0b9b0ef8-b409-45b6-924a-d28b0584cca0\") " Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.616019 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b9b0ef8-b409-45b6-924a-d28b0584cca0-scripts\") pod \"0b9b0ef8-b409-45b6-924a-d28b0584cca0\" (UID: \"0b9b0ef8-b409-45b6-924a-d28b0584cca0\") " Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.616127 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxvpg\" (UniqueName: \"kubernetes.io/projected/0b9b0ef8-b409-45b6-924a-d28b0584cca0-kube-api-access-sxvpg\") pod \"0b9b0ef8-b409-45b6-924a-d28b0584cca0\" (UID: \"0b9b0ef8-b409-45b6-924a-d28b0584cca0\") " Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.616237 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"0b9b0ef8-b409-45b6-924a-d28b0584cca0\" (UID: \"0b9b0ef8-b409-45b6-924a-d28b0584cca0\") " Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.616387 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b9b0ef8-b409-45b6-924a-d28b0584cca0-logs\") pod \"0b9b0ef8-b409-45b6-924a-d28b0584cca0\" (UID: \"0b9b0ef8-b409-45b6-924a-d28b0584cca0\") " Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.616491 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0b9b0ef8-b409-45b6-924a-d28b0584cca0-httpd-run\") pod \"0b9b0ef8-b409-45b6-924a-d28b0584cca0\" (UID: \"0b9b0ef8-b409-45b6-924a-d28b0584cca0\") " Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.616670 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b9b0ef8-b409-45b6-924a-d28b0584cca0-config-data\") pod \"0b9b0ef8-b409-45b6-924a-d28b0584cca0\" (UID: \"0b9b0ef8-b409-45b6-924a-d28b0584cca0\") " Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.644147 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b9b0ef8-b409-45b6-924a-d28b0584cca0-logs" (OuterVolumeSpecName: "logs") pod "0b9b0ef8-b409-45b6-924a-d28b0584cca0" (UID: "0b9b0ef8-b409-45b6-924a-d28b0584cca0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.644373 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b9b0ef8-b409-45b6-924a-d28b0584cca0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0b9b0ef8-b409-45b6-924a-d28b0584cca0" (UID: "0b9b0ef8-b409-45b6-924a-d28b0584cca0"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.653341 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b9b0ef8-b409-45b6-924a-d28b0584cca0-kube-api-access-sxvpg" (OuterVolumeSpecName: "kube-api-access-sxvpg") pod "0b9b0ef8-b409-45b6-924a-d28b0584cca0" (UID: "0b9b0ef8-b409-45b6-924a-d28b0584cca0"). InnerVolumeSpecName "kube-api-access-sxvpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.653476 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b9b0ef8-b409-45b6-924a-d28b0584cca0-scripts" (OuterVolumeSpecName: "scripts") pod "0b9b0ef8-b409-45b6-924a-d28b0584cca0" (UID: "0b9b0ef8-b409-45b6-924a-d28b0584cca0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.671127 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "0b9b0ef8-b409-45b6-924a-d28b0584cca0" (UID: "0b9b0ef8-b409-45b6-924a-d28b0584cca0"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.704048 4821 generic.go:334] "Generic (PLEG): container finished" podID="7b1226c0-ea59-4c57-9837-cafbb926f373" containerID="d38d2bb0cbe872d7cf48cbd1f17e8efbe212914c1257e9ec428fe8aa5fe2bfec" exitCode=0 Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.704135 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mk2vd" event={"ID":"7b1226c0-ea59-4c57-9837-cafbb926f373","Type":"ContainerDied","Data":"d38d2bb0cbe872d7cf48cbd1f17e8efbe212914c1257e9ec428fe8aa5fe2bfec"} Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.734423 4821 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.734450 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b9b0ef8-b409-45b6-924a-d28b0584cca0-logs\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.734459 4821 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0b9b0ef8-b409-45b6-924a-d28b0584cca0-httpd-run\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.734467 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b9b0ef8-b409-45b6-924a-d28b0584cca0-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.734475 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxvpg\" (UniqueName: \"kubernetes.io/projected/0b9b0ef8-b409-45b6-924a-d28b0584cca0-kube-api-access-sxvpg\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.757213 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec1eff5b-04bb-4c4a-bcc6-88025f356922" path="/var/lib/kubelet/pods/ec1eff5b-04bb-4c4a-bcc6-88025f356922/volumes" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.774555 4821 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b9b0ef8-b409-45b6-924a-d28b0584cca0-config-data" (OuterVolumeSpecName: "config-data") pod "0b9b0ef8-b409-45b6-924a-d28b0584cca0" (UID: "0b9b0ef8-b409-45b6-924a-d28b0584cca0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.779004 4821 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.793399 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-687c4d45cb-97qzc" podStartSLOduration=3.793382912 podStartE2EDuration="3.793382912s" podCreationTimestamp="2025-09-30 17:20:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:20:08.790633974 +0000 UTC m=+1004.695679918" watchObservedRunningTime="2025-09-30 17:20:08.793382912 +0000 UTC m=+1004.698428856" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.801398 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.811794 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b9b0ef8-b409-45b6-924a-d28b0584cca0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b9b0ef8-b409-45b6-924a-d28b0584cca0" (UID: "0b9b0ef8-b409-45b6-924a-d28b0584cca0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.835912 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b9b0ef8-b409-45b6-924a-d28b0584cca0-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.835945 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b9b0ef8-b409-45b6-924a-d28b0584cca0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.835957 4821 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.843610 4821 generic.go:334] "Generic (PLEG): container finished" podID="882bc52d-7f8b-461d-b7ae-b1e8660897ef" containerID="22e70be9abf5816120b58f092584fd10bc6ac6322e413fe738018bb231000aed" exitCode=0 Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.855605 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b946d459c-vnhhm" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.855642 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-687c4d45cb-97qzc" event={"ID":"bf0686e3-68e9-45aa-a625-ba24fc284342","Type":"ContainerStarted","Data":"c35bbaf60f715bc9224b7040c1ca471a912ab509c7cafa377e2ba7a2bbfa3915"} Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.855664 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"0b9b0ef8-b409-45b6-924a-d28b0584cca0","Type":"ContainerDied","Data":"7c37dc8b27e661049c5d6faf332281cf7026946f4dd4d3939011de12890a3ba9"} Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.855683 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-vnhhm" event={"ID":"882bc52d-7f8b-461d-b7ae-b1e8660897ef","Type":"ContainerDied","Data":"22e70be9abf5816120b58f092584fd10bc6ac6322e413fe738018bb231000aed"} Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.855694 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-vnhhm" event={"ID":"882bc52d-7f8b-461d-b7ae-b1e8660897ef","Type":"ContainerStarted","Data":"8bfc289c53a6ff99983f0453082ed8a9bb7c2b677b5a0fc176331a8e17ab8377"} Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.855710 4821 scope.go:117] "RemoveContainer" containerID="e14da7b3f4ed50d41530ff4bd6262773eeb827442e4f050beaae3f9a7c70102a" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.890940 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b946d459c-vnhhm" podStartSLOduration=3.89092584 podStartE2EDuration="3.89092584s" podCreationTimestamp="2025-09-30 17:20:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:20:08.890175872 +0000 UTC m=+1004.795221816" watchObservedRunningTime="2025-09-30 17:20:08.89092584 +0000 UTC m=+1004.795971784" Sep 30 17:20:08 crc kubenswrapper[4821]: I0930 17:20:08.921301 4821 scope.go:117] "RemoveContainer" containerID="3f4bc4ac80f863eb8de9f191bd6089e6dd560007c269a24bf3032547b5f49fe7" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.109141 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5b974b45dd-mbzvm" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.109489 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5b974b45dd-mbzvm" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.133840 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.142966 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.159594 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:20:09 crc kubenswrapper[4821]: E0930 17:20:09.159962 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b9b0ef8-b409-45b6-924a-d28b0584cca0" containerName="glance-httpd" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.159978 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b9b0ef8-b409-45b6-924a-d28b0584cca0" containerName="glance-httpd" Sep 30 17:20:09 crc kubenswrapper[4821]: E0930 17:20:09.159991 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b9b0ef8-b409-45b6-924a-d28b0584cca0" containerName="glance-log" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.159998 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b9b0ef8-b409-45b6-924a-d28b0584cca0" containerName="glance-log" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.160177 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b9b0ef8-b409-45b6-924a-d28b0584cca0" containerName="glance-log" Sep 30 17:20:09 crc 
kubenswrapper[4821]: I0930 17:20:09.160203 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b9b0ef8-b409-45b6-924a-d28b0584cca0" containerName="glance-httpd" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.161154 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.169258 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.169450 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.180856 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.246947 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-78b9594fb8-nw9qj" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.247794 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-78b9594fb8-nw9qj" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.248036 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72758fd1-eda8-4729-b57c-8b357ada068a-scripts\") pod \"glance-default-external-api-0\" (UID: \"72758fd1-eda8-4729-b57c-8b357ada068a\") " pod="openstack/glance-default-external-api-0" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.248160 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72758fd1-eda8-4729-b57c-8b357ada068a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"72758fd1-eda8-4729-b57c-8b357ada068a\") " pod="openstack/glance-default-external-api-0" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.248258 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"72758fd1-eda8-4729-b57c-8b357ada068a\") " pod="openstack/glance-default-external-api-0" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.248583 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/72758fd1-eda8-4729-b57c-8b357ada068a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"72758fd1-eda8-4729-b57c-8b357ada068a\") " pod="openstack/glance-default-external-api-0" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.248685 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72758fd1-eda8-4729-b57c-8b357ada068a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"72758fd1-eda8-4729-b57c-8b357ada068a\") " pod="openstack/glance-default-external-api-0" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.248757 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2c4x\" (UniqueName: \"kubernetes.io/projected/72758fd1-eda8-4729-b57c-8b357ada068a-kube-api-access-t2c4x\") pod 
\"glance-default-external-api-0\" (UID: \"72758fd1-eda8-4729-b57c-8b357ada068a\") " pod="openstack/glance-default-external-api-0" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.248848 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72758fd1-eda8-4729-b57c-8b357ada068a-config-data\") pod \"glance-default-external-api-0\" (UID: \"72758fd1-eda8-4729-b57c-8b357ada068a\") " pod="openstack/glance-default-external-api-0" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.248931 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72758fd1-eda8-4729-b57c-8b357ada068a-logs\") pod \"glance-default-external-api-0\" (UID: \"72758fd1-eda8-4729-b57c-8b357ada068a\") " pod="openstack/glance-default-external-api-0" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.342536 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5785886597-f9l4l"] Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.344048 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5785886597-f9l4l" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.351201 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72758fd1-eda8-4729-b57c-8b357ada068a-config-data\") pod \"glance-default-external-api-0\" (UID: \"72758fd1-eda8-4729-b57c-8b357ada068a\") " pod="openstack/glance-default-external-api-0" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.351425 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72758fd1-eda8-4729-b57c-8b357ada068a-logs\") pod \"glance-default-external-api-0\" (UID: \"72758fd1-eda8-4729-b57c-8b357ada068a\") " pod="openstack/glance-default-external-api-0" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.351607 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72758fd1-eda8-4729-b57c-8b357ada068a-scripts\") pod \"glance-default-external-api-0\" (UID: \"72758fd1-eda8-4729-b57c-8b357ada068a\") " pod="openstack/glance-default-external-api-0" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.351711 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72758fd1-eda8-4729-b57c-8b357ada068a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"72758fd1-eda8-4729-b57c-8b357ada068a\") " pod="openstack/glance-default-external-api-0" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.351823 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"72758fd1-eda8-4729-b57c-8b357ada068a\") " pod="openstack/glance-default-external-api-0" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.351949 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72758fd1-eda8-4729-b57c-8b357ada068a-logs\") pod \"glance-default-external-api-0\" (UID: \"72758fd1-eda8-4729-b57c-8b357ada068a\") " pod="openstack/glance-default-external-api-0" Sep 30 17:20:09 crc kubenswrapper[4821]: 
I0930 17:20:09.351974 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/72758fd1-eda8-4729-b57c-8b357ada068a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"72758fd1-eda8-4729-b57c-8b357ada068a\") " pod="openstack/glance-default-external-api-0" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.352076 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72758fd1-eda8-4729-b57c-8b357ada068a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"72758fd1-eda8-4729-b57c-8b357ada068a\") " pod="openstack/glance-default-external-api-0" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.352155 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2c4x\" (UniqueName: \"kubernetes.io/projected/72758fd1-eda8-4729-b57c-8b357ada068a-kube-api-access-t2c4x\") pod \"glance-default-external-api-0\" (UID: \"72758fd1-eda8-4729-b57c-8b357ada068a\") " pod="openstack/glance-default-external-api-0" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.352324 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/72758fd1-eda8-4729-b57c-8b357ada068a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"72758fd1-eda8-4729-b57c-8b357ada068a\") " pod="openstack/glance-default-external-api-0" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.352519 4821 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"72758fd1-eda8-4729-b57c-8b357ada068a\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.362250 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72758fd1-eda8-4729-b57c-8b357ada068a-config-data\") pod \"glance-default-external-api-0\" (UID: \"72758fd1-eda8-4729-b57c-8b357ada068a\") " pod="openstack/glance-default-external-api-0" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.364613 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72758fd1-eda8-4729-b57c-8b357ada068a-scripts\") pod \"glance-default-external-api-0\" (UID: \"72758fd1-eda8-4729-b57c-8b357ada068a\") " pod="openstack/glance-default-external-api-0" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.364776 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.364804 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.377427 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72758fd1-eda8-4729-b57c-8b357ada068a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"72758fd1-eda8-4729-b57c-8b357ada068a\") " pod="openstack/glance-default-external-api-0" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.377906 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/72758fd1-eda8-4729-b57c-8b357ada068a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"72758fd1-eda8-4729-b57c-8b357ada068a\") " pod="openstack/glance-default-external-api-0" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.379033 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5785886597-f9l4l"] Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.395940 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2c4x\" (UniqueName: \"kubernetes.io/projected/72758fd1-eda8-4729-b57c-8b357ada068a-kube-api-access-t2c4x\") pod \"glance-default-external-api-0\" (UID: \"72758fd1-eda8-4729-b57c-8b357ada068a\") " pod="openstack/glance-default-external-api-0" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.417817 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"72758fd1-eda8-4729-b57c-8b357ada068a\") " pod="openstack/glance-default-external-api-0" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.445794 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.453926 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f955b5ec-85cf-43fc-9a7c-8f20a510b015-ovndb-tls-certs\") pod \"neutron-5785886597-f9l4l\" (UID: \"f955b5ec-85cf-43fc-9a7c-8f20a510b015\") " pod="openstack/neutron-5785886597-f9l4l" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.454195 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f955b5ec-85cf-43fc-9a7c-8f20a510b015-httpd-config\") pod \"neutron-5785886597-f9l4l\" (UID: \"f955b5ec-85cf-43fc-9a7c-8f20a510b015\") " pod="openstack/neutron-5785886597-f9l4l" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.454367 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f955b5ec-85cf-43fc-9a7c-8f20a510b015-public-tls-certs\") pod \"neutron-5785886597-f9l4l\" (UID: \"f955b5ec-85cf-43fc-9a7c-8f20a510b015\") " pod="openstack/neutron-5785886597-f9l4l" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.454444 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f955b5ec-85cf-43fc-9a7c-8f20a510b015-combined-ca-bundle\") pod \"neutron-5785886597-f9l4l\" (UID: \"f955b5ec-85cf-43fc-9a7c-8f20a510b015\") " pod="openstack/neutron-5785886597-f9l4l" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.454531 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sdhk\" (UniqueName: \"kubernetes.io/projected/f955b5ec-85cf-43fc-9a7c-8f20a510b015-kube-api-access-7sdhk\") pod \"neutron-5785886597-f9l4l\" (UID: \"f955b5ec-85cf-43fc-9a7c-8f20a510b015\") " pod="openstack/neutron-5785886597-f9l4l" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.454592 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/f955b5ec-85cf-43fc-9a7c-8f20a510b015-config\") pod \"neutron-5785886597-f9l4l\" (UID: \"f955b5ec-85cf-43fc-9a7c-8f20a510b015\") " pod="openstack/neutron-5785886597-f9l4l" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.454690 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f955b5ec-85cf-43fc-9a7c-8f20a510b015-internal-tls-certs\") pod \"neutron-5785886597-f9l4l\" (UID: \"f955b5ec-85cf-43fc-9a7c-8f20a510b015\") " pod="openstack/neutron-5785886597-f9l4l" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.484274 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.556444 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sdhk\" (UniqueName: \"kubernetes.io/projected/f955b5ec-85cf-43fc-9a7c-8f20a510b015-kube-api-access-7sdhk\") pod \"neutron-5785886597-f9l4l\" (UID: \"f955b5ec-85cf-43fc-9a7c-8f20a510b015\") " pod="openstack/neutron-5785886597-f9l4l" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.556485 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f955b5ec-85cf-43fc-9a7c-8f20a510b015-config\") pod \"neutron-5785886597-f9l4l\" (UID: \"f955b5ec-85cf-43fc-9a7c-8f20a510b015\") " pod="openstack/neutron-5785886597-f9l4l" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.556533 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f955b5ec-85cf-43fc-9a7c-8f20a510b015-internal-tls-certs\") pod \"neutron-5785886597-f9l4l\" (UID: \"f955b5ec-85cf-43fc-9a7c-8f20a510b015\") " pod="openstack/neutron-5785886597-f9l4l" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.556564 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f955b5ec-85cf-43fc-9a7c-8f20a510b015-ovndb-tls-certs\") pod \"neutron-5785886597-f9l4l\" (UID: \"f955b5ec-85cf-43fc-9a7c-8f20a510b015\") " pod="openstack/neutron-5785886597-f9l4l" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.556608 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f955b5ec-85cf-43fc-9a7c-8f20a510b015-httpd-config\") pod \"neutron-5785886597-f9l4l\" (UID: \"f955b5ec-85cf-43fc-9a7c-8f20a510b015\") " pod="openstack/neutron-5785886597-f9l4l" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.556655 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f955b5ec-85cf-43fc-9a7c-8f20a510b015-public-tls-certs\") pod \"neutron-5785886597-f9l4l\" (UID: \"f955b5ec-85cf-43fc-9a7c-8f20a510b015\") " pod="openstack/neutron-5785886597-f9l4l" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.556677 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f955b5ec-85cf-43fc-9a7c-8f20a510b015-combined-ca-bundle\") pod \"neutron-5785886597-f9l4l\" (UID: \"f955b5ec-85cf-43fc-9a7c-8f20a510b015\") " pod="openstack/neutron-5785886597-f9l4l" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.562230 4821 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f955b5ec-85cf-43fc-9a7c-8f20a510b015-config\") pod \"neutron-5785886597-f9l4l\" (UID: \"f955b5ec-85cf-43fc-9a7c-8f20a510b015\") " pod="openstack/neutron-5785886597-f9l4l" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.563768 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f955b5ec-85cf-43fc-9a7c-8f20a510b015-combined-ca-bundle\") pod \"neutron-5785886597-f9l4l\" (UID: \"f955b5ec-85cf-43fc-9a7c-8f20a510b015\") " pod="openstack/neutron-5785886597-f9l4l" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.564611 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f955b5ec-85cf-43fc-9a7c-8f20a510b015-internal-tls-certs\") pod \"neutron-5785886597-f9l4l\" (UID: \"f955b5ec-85cf-43fc-9a7c-8f20a510b015\") " pod="openstack/neutron-5785886597-f9l4l" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.568530 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f955b5ec-85cf-43fc-9a7c-8f20a510b015-ovndb-tls-certs\") pod \"neutron-5785886597-f9l4l\" (UID: \"f955b5ec-85cf-43fc-9a7c-8f20a510b015\") " pod="openstack/neutron-5785886597-f9l4l" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.569489 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f955b5ec-85cf-43fc-9a7c-8f20a510b015-httpd-config\") pod \"neutron-5785886597-f9l4l\" (UID: \"f955b5ec-85cf-43fc-9a7c-8f20a510b015\") " pod="openstack/neutron-5785886597-f9l4l" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.570242 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f955b5ec-85cf-43fc-9a7c-8f20a510b015-public-tls-certs\") pod \"neutron-5785886597-f9l4l\" (UID: \"f955b5ec-85cf-43fc-9a7c-8f20a510b015\") " pod="openstack/neutron-5785886597-f9l4l" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.590881 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sdhk\" (UniqueName: \"kubernetes.io/projected/f955b5ec-85cf-43fc-9a7c-8f20a510b015-kube-api-access-7sdhk\") pod \"neutron-5785886597-f9l4l\" (UID: \"f955b5ec-85cf-43fc-9a7c-8f20a510b015\") " pod="openstack/neutron-5785886597-f9l4l" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.771952 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5785886597-f9l4l" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.810340 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-9887b9bbf-xcxhr" Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.867182 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0767a72c-1e7c-44a0-901c-21807a447600","Type":"ContainerStarted","Data":"ba7b80c17bfa38a6f1c9e306c5fcc0e64764060ed8e6c795505b0cf8e8ec967c"} Sep 30 17:20:09 crc kubenswrapper[4821]: I0930 17:20:09.867218 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-687c4d45cb-97qzc" Sep 30 17:20:10 crc kubenswrapper[4821]: I0930 17:20:10.189616 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:20:10 crc kubenswrapper[4821]: I0930 17:20:10.501652 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-mk2vd" Sep 30 17:20:10 crc kubenswrapper[4821]: I0930 17:20:10.602933 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b1226c0-ea59-4c57-9837-cafbb926f373-scripts\") pod \"7b1226c0-ea59-4c57-9837-cafbb926f373\" (UID: \"7b1226c0-ea59-4c57-9837-cafbb926f373\") " Sep 30 17:20:10 crc kubenswrapper[4821]: I0930 17:20:10.603459 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xclkq\" (UniqueName: \"kubernetes.io/projected/7b1226c0-ea59-4c57-9837-cafbb926f373-kube-api-access-xclkq\") pod \"7b1226c0-ea59-4c57-9837-cafbb926f373\" (UID: \"7b1226c0-ea59-4c57-9837-cafbb926f373\") " Sep 30 17:20:10 crc kubenswrapper[4821]: I0930 17:20:10.603529 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b1226c0-ea59-4c57-9837-cafbb926f373-config-data\") pod \"7b1226c0-ea59-4c57-9837-cafbb926f373\" (UID: \"7b1226c0-ea59-4c57-9837-cafbb926f373\") " Sep 30 17:20:10 crc kubenswrapper[4821]: I0930 17:20:10.603660 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b1226c0-ea59-4c57-9837-cafbb926f373-logs\") pod \"7b1226c0-ea59-4c57-9837-cafbb926f373\" (UID: \"7b1226c0-ea59-4c57-9837-cafbb926f373\") " Sep 30 17:20:10 crc kubenswrapper[4821]: I0930 17:20:10.603704 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b1226c0-ea59-4c57-9837-cafbb926f373-combined-ca-bundle\") pod \"7b1226c0-ea59-4c57-9837-cafbb926f373\" (UID: \"7b1226c0-ea59-4c57-9837-cafbb926f373\") " Sep 30 17:20:10 crc kubenswrapper[4821]: I0930 17:20:10.605456 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b1226c0-ea59-4c57-9837-cafbb926f373-logs" (OuterVolumeSpecName: "logs") pod "7b1226c0-ea59-4c57-9837-cafbb926f373" (UID: "7b1226c0-ea59-4c57-9837-cafbb926f373"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:20:10 crc kubenswrapper[4821]: I0930 17:20:10.615545 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b1226c0-ea59-4c57-9837-cafbb926f373-scripts" (OuterVolumeSpecName: "scripts") pod "7b1226c0-ea59-4c57-9837-cafbb926f373" (UID: "7b1226c0-ea59-4c57-9837-cafbb926f373"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:20:10 crc kubenswrapper[4821]: I0930 17:20:10.616352 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b1226c0-ea59-4c57-9837-cafbb926f373-kube-api-access-xclkq" (OuterVolumeSpecName: "kube-api-access-xclkq") pod "7b1226c0-ea59-4c57-9837-cafbb926f373" (UID: "7b1226c0-ea59-4c57-9837-cafbb926f373"). InnerVolumeSpecName "kube-api-access-xclkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:20:10 crc kubenswrapper[4821]: I0930 17:20:10.649981 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b1226c0-ea59-4c57-9837-cafbb926f373-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b1226c0-ea59-4c57-9837-cafbb926f373" (UID: "7b1226c0-ea59-4c57-9837-cafbb926f373"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:20:10 crc kubenswrapper[4821]: I0930 17:20:10.654950 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5785886597-f9l4l"] Sep 30 17:20:10 crc kubenswrapper[4821]: I0930 17:20:10.690540 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b1226c0-ea59-4c57-9837-cafbb926f373-config-data" (OuterVolumeSpecName: "config-data") pod "7b1226c0-ea59-4c57-9837-cafbb926f373" (UID: "7b1226c0-ea59-4c57-9837-cafbb926f373"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:20:10 crc kubenswrapper[4821]: I0930 17:20:10.705181 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b1226c0-ea59-4c57-9837-cafbb926f373-logs\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:10 crc kubenswrapper[4821]: I0930 17:20:10.705216 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b1226c0-ea59-4c57-9837-cafbb926f373-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:10 crc kubenswrapper[4821]: I0930 17:20:10.705225 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b1226c0-ea59-4c57-9837-cafbb926f373-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:10 crc kubenswrapper[4821]: I0930 17:20:10.705233 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xclkq\" (UniqueName: \"kubernetes.io/projected/7b1226c0-ea59-4c57-9837-cafbb926f373-kube-api-access-xclkq\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:10 crc kubenswrapper[4821]: I0930 17:20:10.705242 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b1226c0-ea59-4c57-9837-cafbb926f373-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:10 crc kubenswrapper[4821]: I0930 17:20:10.753673 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b9b0ef8-b409-45b6-924a-d28b0584cca0" path="/var/lib/kubelet/pods/0b9b0ef8-b409-45b6-924a-d28b0584cca0/volumes" Sep 30 17:20:10 crc kubenswrapper[4821]: I0930 17:20:10.911471 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5785886597-f9l4l" event={"ID":"f955b5ec-85cf-43fc-9a7c-8f20a510b015","Type":"ContainerStarted","Data":"49127fcb8994ea8031359a7657a4308a455c4de3a7ccdfdf1106a0be7352e9aa"} Sep 30 17:20:10 crc kubenswrapper[4821]: I0930 17:20:10.930451 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5d47786ff6-8tnsh"] Sep 30 17:20:10 crc kubenswrapper[4821]: E0930 17:20:10.930788 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b1226c0-ea59-4c57-9837-cafbb926f373" containerName="placement-db-sync" Sep 30 17:20:10 crc kubenswrapper[4821]: I0930 17:20:10.930798 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b1226c0-ea59-4c57-9837-cafbb926f373" containerName="placement-db-sync" Sep 30 17:20:10 crc kubenswrapper[4821]: I0930 17:20:10.930957 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b1226c0-ea59-4c57-9837-cafbb926f373" containerName="placement-db-sync" Sep 30 17:20:10 crc kubenswrapper[4821]: I0930 17:20:10.931887 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5d47786ff6-8tnsh" Sep 30 17:20:10 crc kubenswrapper[4821]: I0930 17:20:10.934415 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Sep 30 17:20:10 crc kubenswrapper[4821]: I0930 17:20:10.935603 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"72758fd1-eda8-4729-b57c-8b357ada068a","Type":"ContainerStarted","Data":"6e68dd3c1e24676ff4f421ee3659a4fd13f3e3d813f2d2d6fd8b1338ba6b4488"} Sep 30 17:20:10 crc kubenswrapper[4821]: I0930 17:20:10.935766 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Sep 30 17:20:10 crc kubenswrapper[4821]: I0930 17:20:10.946457 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mk2vd" event={"ID":"7b1226c0-ea59-4c57-9837-cafbb926f373","Type":"ContainerDied","Data":"08f562a91593c33ab35c9fe58dd7d4b8f4b93535ea2e73703953d92ee3b758dd"} Sep 30 17:20:10 crc kubenswrapper[4821]: I0930 17:20:10.946508 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08f562a91593c33ab35c9fe58dd7d4b8f4b93535ea2e73703953d92ee3b758dd" Sep 30 17:20:10 crc kubenswrapper[4821]: I0930 17:20:10.946579 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-mk2vd" Sep 30 17:20:10 crc kubenswrapper[4821]: I0930 17:20:10.950270 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0767a72c-1e7c-44a0-901c-21807a447600","Type":"ContainerStarted","Data":"9f2848a58dc3ddee4ee6d82cd5cbdd897e52e9c7bd7ed2b0eb58088af14898ae"} Sep 30 17:20:10 crc kubenswrapper[4821]: I0930 17:20:10.957560 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5d47786ff6-8tnsh"] Sep 30 17:20:11 crc kubenswrapper[4821]: I0930 17:20:11.013527 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd99f742-0ed7-42e3-92f6-5af6acdf92d9-public-tls-certs\") pod \"placement-5d47786ff6-8tnsh\" (UID: \"dd99f742-0ed7-42e3-92f6-5af6acdf92d9\") " pod="openstack/placement-5d47786ff6-8tnsh" Sep 30 17:20:11 crc kubenswrapper[4821]: I0930 17:20:11.013658 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd99f742-0ed7-42e3-92f6-5af6acdf92d9-config-data\") pod \"placement-5d47786ff6-8tnsh\" (UID: \"dd99f742-0ed7-42e3-92f6-5af6acdf92d9\") " pod="openstack/placement-5d47786ff6-8tnsh" Sep 30 17:20:11 crc kubenswrapper[4821]: I0930 17:20:11.013684 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbd5w\" (UniqueName: \"kubernetes.io/projected/dd99f742-0ed7-42e3-92f6-5af6acdf92d9-kube-api-access-nbd5w\") pod \"placement-5d47786ff6-8tnsh\" (UID: \"dd99f742-0ed7-42e3-92f6-5af6acdf92d9\") " pod="openstack/placement-5d47786ff6-8tnsh" Sep 30 17:20:11 crc kubenswrapper[4821]: I0930 17:20:11.013761 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd99f742-0ed7-42e3-92f6-5af6acdf92d9-scripts\") pod \"placement-5d47786ff6-8tnsh\" (UID: \"dd99f742-0ed7-42e3-92f6-5af6acdf92d9\") " pod="openstack/placement-5d47786ff6-8tnsh" Sep 30 17:20:11 crc 
kubenswrapper[4821]: I0930 17:20:11.013799 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd99f742-0ed7-42e3-92f6-5af6acdf92d9-logs\") pod \"placement-5d47786ff6-8tnsh\" (UID: \"dd99f742-0ed7-42e3-92f6-5af6acdf92d9\") " pod="openstack/placement-5d47786ff6-8tnsh" Sep 30 17:20:11 crc kubenswrapper[4821]: I0930 17:20:11.013834 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd99f742-0ed7-42e3-92f6-5af6acdf92d9-combined-ca-bundle\") pod \"placement-5d47786ff6-8tnsh\" (UID: \"dd99f742-0ed7-42e3-92f6-5af6acdf92d9\") " pod="openstack/placement-5d47786ff6-8tnsh" Sep 30 17:20:11 crc kubenswrapper[4821]: I0930 17:20:11.014005 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd99f742-0ed7-42e3-92f6-5af6acdf92d9-internal-tls-certs\") pod \"placement-5d47786ff6-8tnsh\" (UID: \"dd99f742-0ed7-42e3-92f6-5af6acdf92d9\") " pod="openstack/placement-5d47786ff6-8tnsh" Sep 30 17:20:11 crc kubenswrapper[4821]: I0930 17:20:11.116000 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd99f742-0ed7-42e3-92f6-5af6acdf92d9-scripts\") pod \"placement-5d47786ff6-8tnsh\" (UID: \"dd99f742-0ed7-42e3-92f6-5af6acdf92d9\") " pod="openstack/placement-5d47786ff6-8tnsh" Sep 30 17:20:11 crc kubenswrapper[4821]: I0930 17:20:11.120950 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd99f742-0ed7-42e3-92f6-5af6acdf92d9-logs\") pod \"placement-5d47786ff6-8tnsh\" (UID: \"dd99f742-0ed7-42e3-92f6-5af6acdf92d9\") " pod="openstack/placement-5d47786ff6-8tnsh" Sep 30 17:20:11 crc kubenswrapper[4821]: I0930 17:20:11.121073 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd99f742-0ed7-42e3-92f6-5af6acdf92d9-combined-ca-bundle\") pod \"placement-5d47786ff6-8tnsh\" (UID: \"dd99f742-0ed7-42e3-92f6-5af6acdf92d9\") " pod="openstack/placement-5d47786ff6-8tnsh" Sep 30 17:20:11 crc kubenswrapper[4821]: I0930 17:20:11.121367 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd99f742-0ed7-42e3-92f6-5af6acdf92d9-internal-tls-certs\") pod \"placement-5d47786ff6-8tnsh\" (UID: \"dd99f742-0ed7-42e3-92f6-5af6acdf92d9\") " pod="openstack/placement-5d47786ff6-8tnsh" Sep 30 17:20:11 crc kubenswrapper[4821]: I0930 17:20:11.121473 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd99f742-0ed7-42e3-92f6-5af6acdf92d9-public-tls-certs\") pod \"placement-5d47786ff6-8tnsh\" (UID: \"dd99f742-0ed7-42e3-92f6-5af6acdf92d9\") " pod="openstack/placement-5d47786ff6-8tnsh" Sep 30 17:20:11 crc kubenswrapper[4821]: I0930 17:20:11.121592 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd99f742-0ed7-42e3-92f6-5af6acdf92d9-config-data\") pod \"placement-5d47786ff6-8tnsh\" (UID: \"dd99f742-0ed7-42e3-92f6-5af6acdf92d9\") " pod="openstack/placement-5d47786ff6-8tnsh" Sep 30 17:20:11 crc kubenswrapper[4821]: I0930 17:20:11.121676 4821 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nbd5w\" (UniqueName: \"kubernetes.io/projected/dd99f742-0ed7-42e3-92f6-5af6acdf92d9-kube-api-access-nbd5w\") pod \"placement-5d47786ff6-8tnsh\" (UID: \"dd99f742-0ed7-42e3-92f6-5af6acdf92d9\") " pod="openstack/placement-5d47786ff6-8tnsh" Sep 30 17:20:11 crc kubenswrapper[4821]: I0930 17:20:11.128947 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd99f742-0ed7-42e3-92f6-5af6acdf92d9-scripts\") pod \"placement-5d47786ff6-8tnsh\" (UID: \"dd99f742-0ed7-42e3-92f6-5af6acdf92d9\") " pod="openstack/placement-5d47786ff6-8tnsh" Sep 30 17:20:11 crc kubenswrapper[4821]: I0930 17:20:11.130724 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd99f742-0ed7-42e3-92f6-5af6acdf92d9-logs\") pod \"placement-5d47786ff6-8tnsh\" (UID: \"dd99f742-0ed7-42e3-92f6-5af6acdf92d9\") " pod="openstack/placement-5d47786ff6-8tnsh" Sep 30 17:20:11 crc kubenswrapper[4821]: I0930 17:20:11.170625 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd99f742-0ed7-42e3-92f6-5af6acdf92d9-public-tls-certs\") pod \"placement-5d47786ff6-8tnsh\" (UID: \"dd99f742-0ed7-42e3-92f6-5af6acdf92d9\") " pod="openstack/placement-5d47786ff6-8tnsh" Sep 30 17:20:11 crc kubenswrapper[4821]: I0930 17:20:11.171866 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd99f742-0ed7-42e3-92f6-5af6acdf92d9-config-data\") pod \"placement-5d47786ff6-8tnsh\" (UID: \"dd99f742-0ed7-42e3-92f6-5af6acdf92d9\") " pod="openstack/placement-5d47786ff6-8tnsh" Sep 30 17:20:11 crc kubenswrapper[4821]: I0930 17:20:11.172592 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd99f742-0ed7-42e3-92f6-5af6acdf92d9-combined-ca-bundle\") pod \"placement-5d47786ff6-8tnsh\" (UID: \"dd99f742-0ed7-42e3-92f6-5af6acdf92d9\") " pod="openstack/placement-5d47786ff6-8tnsh" Sep 30 17:20:11 crc kubenswrapper[4821]: I0930 17:20:11.172743 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd99f742-0ed7-42e3-92f6-5af6acdf92d9-internal-tls-certs\") pod \"placement-5d47786ff6-8tnsh\" (UID: \"dd99f742-0ed7-42e3-92f6-5af6acdf92d9\") " pod="openstack/placement-5d47786ff6-8tnsh" Sep 30 17:20:11 crc kubenswrapper[4821]: I0930 17:20:11.174955 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbd5w\" (UniqueName: \"kubernetes.io/projected/dd99f742-0ed7-42e3-92f6-5af6acdf92d9-kube-api-access-nbd5w\") pod \"placement-5d47786ff6-8tnsh\" (UID: \"dd99f742-0ed7-42e3-92f6-5af6acdf92d9\") " pod="openstack/placement-5d47786ff6-8tnsh" Sep 30 17:20:11 crc kubenswrapper[4821]: I0930 17:20:11.258462 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5d47786ff6-8tnsh" Sep 30 17:20:11 crc kubenswrapper[4821]: I0930 17:20:11.952951 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5d47786ff6-8tnsh"] Sep 30 17:20:12 crc kubenswrapper[4821]: I0930 17:20:12.009408 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5785886597-f9l4l" event={"ID":"f955b5ec-85cf-43fc-9a7c-8f20a510b015","Type":"ContainerStarted","Data":"9f9885f46c480c256c70ec4dbc05d4101491ea54312b694a8c874bb853909f71"} Sep 30 17:20:13 crc kubenswrapper[4821]: I0930 17:20:13.040157 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5785886597-f9l4l" event={"ID":"f955b5ec-85cf-43fc-9a7c-8f20a510b015","Type":"ContainerStarted","Data":"2b0c6f5ec9b9940d8e987c5ff66390c5a852a4ca0ec12f757c23e67c75f2b73e"} Sep 30 17:20:13 crc kubenswrapper[4821]: I0930 17:20:13.040946 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5785886597-f9l4l" Sep 30 17:20:13 crc kubenswrapper[4821]: I0930 17:20:13.042465 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"72758fd1-eda8-4729-b57c-8b357ada068a","Type":"ContainerStarted","Data":"cf41986fda4639b28a53047b338d6d6684994da1901771aea3061284b8798661"} Sep 30 17:20:13 crc kubenswrapper[4821]: I0930 17:20:13.051824 4821 generic.go:334] "Generic (PLEG): container finished" podID="008af46f-5c9c-44f6-beb7-fa105649d52b" containerID="00ff686225dc4b40c1ffbd91cceca8a73995eee175c3caeb4e2fada4e36e43e3" exitCode=0 Sep 30 17:20:13 crc kubenswrapper[4821]: I0930 17:20:13.051920 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-j72gp" event={"ID":"008af46f-5c9c-44f6-beb7-fa105649d52b","Type":"ContainerDied","Data":"00ff686225dc4b40c1ffbd91cceca8a73995eee175c3caeb4e2fada4e36e43e3"} Sep 30 17:20:13 crc kubenswrapper[4821]: I0930 17:20:13.053744 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0767a72c-1e7c-44a0-901c-21807a447600","Type":"ContainerStarted","Data":"43450ab0975b7cc23578e8fba6e9566b3e95ad2ee96642ae6c54420a7d21d974"} Sep 30 17:20:13 crc kubenswrapper[4821]: I0930 17:20:13.055075 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5d47786ff6-8tnsh" event={"ID":"dd99f742-0ed7-42e3-92f6-5af6acdf92d9","Type":"ContainerStarted","Data":"c2203c2443a24c1f44e8c9924e0c7a23c9e8ca9dd1902dada9cb3c556fc2c3be"} Sep 30 17:20:13 crc kubenswrapper[4821]: I0930 17:20:13.055114 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5d47786ff6-8tnsh" event={"ID":"dd99f742-0ed7-42e3-92f6-5af6acdf92d9","Type":"ContainerStarted","Data":"2a89ac0b3b41282a1c86c27b6621532fbf841f6cd70ff589f46322e9364082d2"} Sep 30 17:20:13 crc kubenswrapper[4821]: I0930 17:20:13.067879 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5785886597-f9l4l" podStartSLOduration=4.067863094 podStartE2EDuration="4.067863094s" podCreationTimestamp="2025-09-30 17:20:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:20:13.061336222 +0000 UTC m=+1008.966382166" watchObservedRunningTime="2025-09-30 17:20:13.067863094 +0000 UTC m=+1008.972909038" Sep 30 17:20:13 crc kubenswrapper[4821]: I0930 17:20:13.124277 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.124255788 podStartE2EDuration="6.124255788s" podCreationTimestamp="2025-09-30 17:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:20:13.121418357 +0000 UTC m=+1009.026464301" watchObservedRunningTime="2025-09-30 17:20:13.124255788 +0000 UTC m=+1009.029301732" Sep 30 17:20:14 crc kubenswrapper[4821]: I0930 17:20:14.064801 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"72758fd1-eda8-4729-b57c-8b357ada068a","Type":"ContainerStarted","Data":"343613df0dd5a4f886e14ee10b73f068a13da7ac4077cbf50a22140682bdab81"} Sep 30 17:20:14 crc kubenswrapper[4821]: I0930 17:20:14.066916 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5d47786ff6-8tnsh" event={"ID":"dd99f742-0ed7-42e3-92f6-5af6acdf92d9","Type":"ContainerStarted","Data":"2b058ee8088b9688c3e8e52a05fbd0c32bd11fcd13a7db3b578763538efd7982"} Sep 30 17:20:14 crc kubenswrapper[4821]: I0930 17:20:14.093268 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.093249487 podStartE2EDuration="5.093249487s" podCreationTimestamp="2025-09-30 17:20:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:20:14.087157954 +0000 UTC m=+1009.992203898" watchObservedRunningTime="2025-09-30 17:20:14.093249487 +0000 UTC m=+1009.998295431" Sep 30 17:20:14 crc kubenswrapper[4821]: I0930 17:20:14.114365 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5d47786ff6-8tnsh" podStartSLOduration=4.114346372 podStartE2EDuration="4.114346372s" podCreationTimestamp="2025-09-30 17:20:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:20:14.111934261 +0000 UTC m=+1010.016980205" watchObservedRunningTime="2025-09-30 17:20:14.114346372 +0000 UTC m=+1010.019392316" Sep 30 17:20:14 crc kubenswrapper[4821]: I0930 17:20:14.466953 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-j72gp" Sep 30 17:20:14 crc kubenswrapper[4821]: I0930 17:20:14.506647 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/008af46f-5c9c-44f6-beb7-fa105649d52b-scripts\") pod \"008af46f-5c9c-44f6-beb7-fa105649d52b\" (UID: \"008af46f-5c9c-44f6-beb7-fa105649d52b\") " Sep 30 17:20:14 crc kubenswrapper[4821]: I0930 17:20:14.506731 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/008af46f-5c9c-44f6-beb7-fa105649d52b-combined-ca-bundle\") pod \"008af46f-5c9c-44f6-beb7-fa105649d52b\" (UID: \"008af46f-5c9c-44f6-beb7-fa105649d52b\") " Sep 30 17:20:14 crc kubenswrapper[4821]: I0930 17:20:14.506784 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/008af46f-5c9c-44f6-beb7-fa105649d52b-credential-keys\") pod \"008af46f-5c9c-44f6-beb7-fa105649d52b\" (UID: \"008af46f-5c9c-44f6-beb7-fa105649d52b\") " Sep 30 17:20:14 crc kubenswrapper[4821]: I0930 17:20:14.506833 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/008af46f-5c9c-44f6-beb7-fa105649d52b-config-data\") pod \"008af46f-5c9c-44f6-beb7-fa105649d52b\" (UID: \"008af46f-5c9c-44f6-beb7-fa105649d52b\") " Sep 30 17:20:14 crc kubenswrapper[4821]: I0930 17:20:14.506854 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgcvq\" (UniqueName: \"kubernetes.io/projected/008af46f-5c9c-44f6-beb7-fa105649d52b-kube-api-access-tgcvq\") pod \"008af46f-5c9c-44f6-beb7-fa105649d52b\" (UID: \"008af46f-5c9c-44f6-beb7-fa105649d52b\") " Sep 30 17:20:14 crc kubenswrapper[4821]: I0930 17:20:14.507014 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/008af46f-5c9c-44f6-beb7-fa105649d52b-fernet-keys\") pod \"008af46f-5c9c-44f6-beb7-fa105649d52b\" (UID: \"008af46f-5c9c-44f6-beb7-fa105649d52b\") " Sep 30 17:20:14 crc kubenswrapper[4821]: I0930 17:20:14.523408 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/008af46f-5c9c-44f6-beb7-fa105649d52b-kube-api-access-tgcvq" (OuterVolumeSpecName: "kube-api-access-tgcvq") pod "008af46f-5c9c-44f6-beb7-fa105649d52b" (UID: "008af46f-5c9c-44f6-beb7-fa105649d52b"). InnerVolumeSpecName "kube-api-access-tgcvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:20:14 crc kubenswrapper[4821]: I0930 17:20:14.524437 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/008af46f-5c9c-44f6-beb7-fa105649d52b-scripts" (OuterVolumeSpecName: "scripts") pod "008af46f-5c9c-44f6-beb7-fa105649d52b" (UID: "008af46f-5c9c-44f6-beb7-fa105649d52b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:20:14 crc kubenswrapper[4821]: I0930 17:20:14.527379 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/008af46f-5c9c-44f6-beb7-fa105649d52b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "008af46f-5c9c-44f6-beb7-fa105649d52b" (UID: "008af46f-5c9c-44f6-beb7-fa105649d52b"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:20:14 crc kubenswrapper[4821]: I0930 17:20:14.529258 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/008af46f-5c9c-44f6-beb7-fa105649d52b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "008af46f-5c9c-44f6-beb7-fa105649d52b" (UID: "008af46f-5c9c-44f6-beb7-fa105649d52b"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:20:14 crc kubenswrapper[4821]: I0930 17:20:14.604526 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/008af46f-5c9c-44f6-beb7-fa105649d52b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "008af46f-5c9c-44f6-beb7-fa105649d52b" (UID: "008af46f-5c9c-44f6-beb7-fa105649d52b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:20:14 crc kubenswrapper[4821]: I0930 17:20:14.605309 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/008af46f-5c9c-44f6-beb7-fa105649d52b-config-data" (OuterVolumeSpecName: "config-data") pod "008af46f-5c9c-44f6-beb7-fa105649d52b" (UID: "008af46f-5c9c-44f6-beb7-fa105649d52b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:20:14 crc kubenswrapper[4821]: I0930 17:20:14.611553 4821 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/008af46f-5c9c-44f6-beb7-fa105649d52b-fernet-keys\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:14 crc kubenswrapper[4821]: I0930 17:20:14.611613 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/008af46f-5c9c-44f6-beb7-fa105649d52b-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:14 crc kubenswrapper[4821]: I0930 17:20:14.611625 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/008af46f-5c9c-44f6-beb7-fa105649d52b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:14 crc kubenswrapper[4821]: I0930 17:20:14.611638 4821 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/008af46f-5c9c-44f6-beb7-fa105649d52b-credential-keys\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:14 crc kubenswrapper[4821]: I0930 17:20:14.611663 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/008af46f-5c9c-44f6-beb7-fa105649d52b-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:14 crc kubenswrapper[4821]: I0930 17:20:14.611671 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgcvq\" (UniqueName: \"kubernetes.io/projected/008af46f-5c9c-44f6-beb7-fa105649d52b-kube-api-access-tgcvq\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.076445 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-j72gp" event={"ID":"008af46f-5c9c-44f6-beb7-fa105649d52b","Type":"ContainerDied","Data":"f07577e25c2be6b347b3b297f17e0f700f6be05a4076a0909bceb2f81a8369a3"} Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.076775 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f07577e25c2be6b347b3b297f17e0f700f6be05a4076a0909bceb2f81a8369a3" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.076500 4821 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-j72gp" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.076966 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5d47786ff6-8tnsh" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.077027 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5d47786ff6-8tnsh" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.217325 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-dcc94dcb7-4dt65"] Sep 30 17:20:15 crc kubenswrapper[4821]: E0930 17:20:15.222320 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="008af46f-5c9c-44f6-beb7-fa105649d52b" containerName="keystone-bootstrap" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.222349 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="008af46f-5c9c-44f6-beb7-fa105649d52b" containerName="keystone-bootstrap" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.222574 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="008af46f-5c9c-44f6-beb7-fa105649d52b" containerName="keystone-bootstrap" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.223136 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-dcc94dcb7-4dt65" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.228507 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.228545 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.228821 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.228983 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.229138 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.233540 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dkq42" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.242968 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-dcc94dcb7-4dt65"] Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.325903 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2463ed19-463b-4138-ba45-0890d3173e94-public-tls-certs\") pod \"keystone-dcc94dcb7-4dt65\" (UID: \"2463ed19-463b-4138-ba45-0890d3173e94\") " pod="openstack/keystone-dcc94dcb7-4dt65" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.325977 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2463ed19-463b-4138-ba45-0890d3173e94-credential-keys\") pod \"keystone-dcc94dcb7-4dt65\" (UID: \"2463ed19-463b-4138-ba45-0890d3173e94\") " pod="openstack/keystone-dcc94dcb7-4dt65" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.326035 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/2463ed19-463b-4138-ba45-0890d3173e94-config-data\") pod \"keystone-dcc94dcb7-4dt65\" (UID: \"2463ed19-463b-4138-ba45-0890d3173e94\") " pod="openstack/keystone-dcc94dcb7-4dt65" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.326055 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2463ed19-463b-4138-ba45-0890d3173e94-internal-tls-certs\") pod \"keystone-dcc94dcb7-4dt65\" (UID: \"2463ed19-463b-4138-ba45-0890d3173e94\") " pod="openstack/keystone-dcc94dcb7-4dt65" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.326107 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f9qg\" (UniqueName: \"kubernetes.io/projected/2463ed19-463b-4138-ba45-0890d3173e94-kube-api-access-4f9qg\") pod \"keystone-dcc94dcb7-4dt65\" (UID: \"2463ed19-463b-4138-ba45-0890d3173e94\") " pod="openstack/keystone-dcc94dcb7-4dt65" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.326196 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2463ed19-463b-4138-ba45-0890d3173e94-combined-ca-bundle\") pod \"keystone-dcc94dcb7-4dt65\" (UID: \"2463ed19-463b-4138-ba45-0890d3173e94\") " pod="openstack/keystone-dcc94dcb7-4dt65" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.326219 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2463ed19-463b-4138-ba45-0890d3173e94-scripts\") pod \"keystone-dcc94dcb7-4dt65\" (UID: \"2463ed19-463b-4138-ba45-0890d3173e94\") " pod="openstack/keystone-dcc94dcb7-4dt65" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.326250 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2463ed19-463b-4138-ba45-0890d3173e94-fernet-keys\") pod \"keystone-dcc94dcb7-4dt65\" (UID: \"2463ed19-463b-4138-ba45-0890d3173e94\") " pod="openstack/keystone-dcc94dcb7-4dt65" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.427958 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2463ed19-463b-4138-ba45-0890d3173e94-combined-ca-bundle\") pod \"keystone-dcc94dcb7-4dt65\" (UID: \"2463ed19-463b-4138-ba45-0890d3173e94\") " pod="openstack/keystone-dcc94dcb7-4dt65" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.428013 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2463ed19-463b-4138-ba45-0890d3173e94-scripts\") pod \"keystone-dcc94dcb7-4dt65\" (UID: \"2463ed19-463b-4138-ba45-0890d3173e94\") " pod="openstack/keystone-dcc94dcb7-4dt65" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.428046 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2463ed19-463b-4138-ba45-0890d3173e94-fernet-keys\") pod \"keystone-dcc94dcb7-4dt65\" (UID: \"2463ed19-463b-4138-ba45-0890d3173e94\") " pod="openstack/keystone-dcc94dcb7-4dt65" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.428122 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2463ed19-463b-4138-ba45-0890d3173e94-public-tls-certs\") pod \"keystone-dcc94dcb7-4dt65\" (UID: \"2463ed19-463b-4138-ba45-0890d3173e94\") " pod="openstack/keystone-dcc94dcb7-4dt65" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.428152 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2463ed19-463b-4138-ba45-0890d3173e94-credential-keys\") pod \"keystone-dcc94dcb7-4dt65\" (UID: \"2463ed19-463b-4138-ba45-0890d3173e94\") " pod="openstack/keystone-dcc94dcb7-4dt65" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.428191 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2463ed19-463b-4138-ba45-0890d3173e94-config-data\") pod \"keystone-dcc94dcb7-4dt65\" (UID: \"2463ed19-463b-4138-ba45-0890d3173e94\") " pod="openstack/keystone-dcc94dcb7-4dt65" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.428214 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2463ed19-463b-4138-ba45-0890d3173e94-internal-tls-certs\") pod \"keystone-dcc94dcb7-4dt65\" (UID: \"2463ed19-463b-4138-ba45-0890d3173e94\") " pod="openstack/keystone-dcc94dcb7-4dt65" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.428242 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f9qg\" (UniqueName: \"kubernetes.io/projected/2463ed19-463b-4138-ba45-0890d3173e94-kube-api-access-4f9qg\") pod \"keystone-dcc94dcb7-4dt65\" (UID: \"2463ed19-463b-4138-ba45-0890d3173e94\") " pod="openstack/keystone-dcc94dcb7-4dt65" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.434618 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2463ed19-463b-4138-ba45-0890d3173e94-combined-ca-bundle\") pod \"keystone-dcc94dcb7-4dt65\" (UID: \"2463ed19-463b-4138-ba45-0890d3173e94\") " pod="openstack/keystone-dcc94dcb7-4dt65" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.434653 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2463ed19-463b-4138-ba45-0890d3173e94-credential-keys\") pod \"keystone-dcc94dcb7-4dt65\" (UID: \"2463ed19-463b-4138-ba45-0890d3173e94\") " pod="openstack/keystone-dcc94dcb7-4dt65" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.435532 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2463ed19-463b-4138-ba45-0890d3173e94-fernet-keys\") pod \"keystone-dcc94dcb7-4dt65\" (UID: \"2463ed19-463b-4138-ba45-0890d3173e94\") " pod="openstack/keystone-dcc94dcb7-4dt65" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.435706 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2463ed19-463b-4138-ba45-0890d3173e94-scripts\") pod \"keystone-dcc94dcb7-4dt65\" (UID: \"2463ed19-463b-4138-ba45-0890d3173e94\") " pod="openstack/keystone-dcc94dcb7-4dt65" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.436572 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2463ed19-463b-4138-ba45-0890d3173e94-public-tls-certs\") pod \"keystone-dcc94dcb7-4dt65\" (UID: \"2463ed19-463b-4138-ba45-0890d3173e94\") " 
pod="openstack/keystone-dcc94dcb7-4dt65" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.438612 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2463ed19-463b-4138-ba45-0890d3173e94-internal-tls-certs\") pod \"keystone-dcc94dcb7-4dt65\" (UID: \"2463ed19-463b-4138-ba45-0890d3173e94\") " pod="openstack/keystone-dcc94dcb7-4dt65" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.446868 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2463ed19-463b-4138-ba45-0890d3173e94-config-data\") pod \"keystone-dcc94dcb7-4dt65\" (UID: \"2463ed19-463b-4138-ba45-0890d3173e94\") " pod="openstack/keystone-dcc94dcb7-4dt65" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.448715 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f9qg\" (UniqueName: \"kubernetes.io/projected/2463ed19-463b-4138-ba45-0890d3173e94-kube-api-access-4f9qg\") pod \"keystone-dcc94dcb7-4dt65\" (UID: \"2463ed19-463b-4138-ba45-0890d3173e94\") " pod="openstack/keystone-dcc94dcb7-4dt65" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.546072 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-dcc94dcb7-4dt65" Sep 30 17:20:15 crc kubenswrapper[4821]: I0930 17:20:15.862268 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b946d459c-vnhhm" Sep 30 17:20:16 crc kubenswrapper[4821]: I0930 17:20:16.000019 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-znw7f"] Sep 30 17:20:16 crc kubenswrapper[4821]: I0930 17:20:16.000319 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7987f74bbc-znw7f" podUID="ca7b68c4-bbb5-4b88-aab9-b246a1c8a158" containerName="dnsmasq-dns" containerID="cri-o://05ddd2bf05955f82ab8e5de6b5c709c03eb5697f31e6b92d3378fd9961d1cae0" gracePeriod=10 Sep 30 17:20:16 crc kubenswrapper[4821]: I0930 17:20:16.461257 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-dcc94dcb7-4dt65"] Sep 30 17:20:16 crc kubenswrapper[4821]: I0930 17:20:16.620717 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-znw7f" Sep 30 17:20:16 crc kubenswrapper[4821]: I0930 17:20:16.778295 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tm8lq\" (UniqueName: \"kubernetes.io/projected/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158-kube-api-access-tm8lq\") pod \"ca7b68c4-bbb5-4b88-aab9-b246a1c8a158\" (UID: \"ca7b68c4-bbb5-4b88-aab9-b246a1c8a158\") " Sep 30 17:20:16 crc kubenswrapper[4821]: I0930 17:20:16.779149 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158-config\") pod \"ca7b68c4-bbb5-4b88-aab9-b246a1c8a158\" (UID: \"ca7b68c4-bbb5-4b88-aab9-b246a1c8a158\") " Sep 30 17:20:16 crc kubenswrapper[4821]: I0930 17:20:16.779339 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158-dns-svc\") pod \"ca7b68c4-bbb5-4b88-aab9-b246a1c8a158\" (UID: \"ca7b68c4-bbb5-4b88-aab9-b246a1c8a158\") " Sep 30 17:20:16 crc kubenswrapper[4821]: I0930 17:20:16.779412 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158-ovsdbserver-nb\") pod \"ca7b68c4-bbb5-4b88-aab9-b246a1c8a158\" (UID: \"ca7b68c4-bbb5-4b88-aab9-b246a1c8a158\") " Sep 30 17:20:16 crc kubenswrapper[4821]: I0930 17:20:16.779455 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158-ovsdbserver-sb\") pod \"ca7b68c4-bbb5-4b88-aab9-b246a1c8a158\" (UID: \"ca7b68c4-bbb5-4b88-aab9-b246a1c8a158\") " Sep 30 17:20:16 crc kubenswrapper[4821]: I0930 17:20:16.787523 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158-kube-api-access-tm8lq" (OuterVolumeSpecName: "kube-api-access-tm8lq") pod "ca7b68c4-bbb5-4b88-aab9-b246a1c8a158" (UID: "ca7b68c4-bbb5-4b88-aab9-b246a1c8a158"). InnerVolumeSpecName "kube-api-access-tm8lq". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:20:16 crc kubenswrapper[4821]: I0930 17:20:16.848311 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ca7b68c4-bbb5-4b88-aab9-b246a1c8a158" (UID: "ca7b68c4-bbb5-4b88-aab9-b246a1c8a158"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:20:16 crc kubenswrapper[4821]: I0930 17:20:16.857770 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ca7b68c4-bbb5-4b88-aab9-b246a1c8a158" (UID: "ca7b68c4-bbb5-4b88-aab9-b246a1c8a158"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:20:16 crc kubenswrapper[4821]: E0930 17:20:16.867441 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158-config podName:ca7b68c4-bbb5-4b88-aab9-b246a1c8a158 nodeName:}" failed. No retries permitted until 2025-09-30 17:20:17.367408449 +0000 UTC m=+1013.272454393 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "config" (UniqueName: "kubernetes.io/configmap/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158-config") pod "ca7b68c4-bbb5-4b88-aab9-b246a1c8a158" (UID: "ca7b68c4-bbb5-4b88-aab9-b246a1c8a158") : error deleting /var/lib/kubelet/pods/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158/volume-subpaths: remove /var/lib/kubelet/pods/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158/volume-subpaths: no such file or directory Sep 30 17:20:16 crc kubenswrapper[4821]: I0930 17:20:16.867615 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ca7b68c4-bbb5-4b88-aab9-b246a1c8a158" (UID: "ca7b68c4-bbb5-4b88-aab9-b246a1c8a158"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:20:16 crc kubenswrapper[4821]: I0930 17:20:16.881345 4821 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:16 crc kubenswrapper[4821]: I0930 17:20:16.881373 4821 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:16 crc kubenswrapper[4821]: I0930 17:20:16.881384 4821 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:16 crc kubenswrapper[4821]: I0930 17:20:16.881395 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tm8lq\" (UniqueName: \"kubernetes.io/projected/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158-kube-api-access-tm8lq\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:17 crc kubenswrapper[4821]: I0930 17:20:17.105994 4821 generic.go:334] "Generic (PLEG): container finished" podID="ca7b68c4-bbb5-4b88-aab9-b246a1c8a158" containerID="05ddd2bf05955f82ab8e5de6b5c709c03eb5697f31e6b92d3378fd9961d1cae0" exitCode=0 Sep 30 17:20:17 crc kubenswrapper[4821]: I0930 17:20:17.106064 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-znw7f" Sep 30 17:20:17 crc kubenswrapper[4821]: I0930 17:20:17.106090 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-znw7f" event={"ID":"ca7b68c4-bbb5-4b88-aab9-b246a1c8a158","Type":"ContainerDied","Data":"05ddd2bf05955f82ab8e5de6b5c709c03eb5697f31e6b92d3378fd9961d1cae0"} Sep 30 17:20:17 crc kubenswrapper[4821]: I0930 17:20:17.106123 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-znw7f" event={"ID":"ca7b68c4-bbb5-4b88-aab9-b246a1c8a158","Type":"ContainerDied","Data":"1b30f4e3f9dcc9e59b8680e9541c894b202d9c943d99410f2404b1784708ea4b"} Sep 30 17:20:17 crc kubenswrapper[4821]: I0930 17:20:17.106144 4821 scope.go:117] "RemoveContainer" containerID="05ddd2bf05955f82ab8e5de6b5c709c03eb5697f31e6b92d3378fd9961d1cae0" Sep 30 17:20:17 crc kubenswrapper[4821]: I0930 17:20:17.107782 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-dcc94dcb7-4dt65" event={"ID":"2463ed19-463b-4138-ba45-0890d3173e94","Type":"ContainerStarted","Data":"f41fa34526ab6c728169a39123aaf70c02a26eb7a7e89ec46afaab2476cdbbb8"} Sep 30 17:20:17 crc kubenswrapper[4821]: I0930 17:20:17.128665 4821 scope.go:117] "RemoveContainer" containerID="fb5a234ef2f4030874ea2ea28a97aafa2b4e15109afa0310d11783f057a65a5b" Sep 30 17:20:17 crc kubenswrapper[4821]: I0930 17:20:17.150751 4821 scope.go:117] "RemoveContainer" containerID="05ddd2bf05955f82ab8e5de6b5c709c03eb5697f31e6b92d3378fd9961d1cae0" Sep 30 17:20:17 crc kubenswrapper[4821]: E0930 17:20:17.151488 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05ddd2bf05955f82ab8e5de6b5c709c03eb5697f31e6b92d3378fd9961d1cae0\": container with ID starting with 05ddd2bf05955f82ab8e5de6b5c709c03eb5697f31e6b92d3378fd9961d1cae0 not found: ID does not exist" containerID="05ddd2bf05955f82ab8e5de6b5c709c03eb5697f31e6b92d3378fd9961d1cae0" Sep 30 17:20:17 crc kubenswrapper[4821]: I0930 17:20:17.151518 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05ddd2bf05955f82ab8e5de6b5c709c03eb5697f31e6b92d3378fd9961d1cae0"} err="failed to get container status \"05ddd2bf05955f82ab8e5de6b5c709c03eb5697f31e6b92d3378fd9961d1cae0\": rpc error: code = NotFound desc = could not find container \"05ddd2bf05955f82ab8e5de6b5c709c03eb5697f31e6b92d3378fd9961d1cae0\": container with ID starting with 05ddd2bf05955f82ab8e5de6b5c709c03eb5697f31e6b92d3378fd9961d1cae0 not found: ID does not exist" Sep 30 17:20:17 crc kubenswrapper[4821]: I0930 17:20:17.151543 4821 scope.go:117] "RemoveContainer" containerID="fb5a234ef2f4030874ea2ea28a97aafa2b4e15109afa0310d11783f057a65a5b" Sep 30 17:20:17 crc kubenswrapper[4821]: E0930 17:20:17.152129 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb5a234ef2f4030874ea2ea28a97aafa2b4e15109afa0310d11783f057a65a5b\": container with ID starting with fb5a234ef2f4030874ea2ea28a97aafa2b4e15109afa0310d11783f057a65a5b not found: ID does not exist" containerID="fb5a234ef2f4030874ea2ea28a97aafa2b4e15109afa0310d11783f057a65a5b" Sep 30 17:20:17 crc kubenswrapper[4821]: I0930 17:20:17.152154 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb5a234ef2f4030874ea2ea28a97aafa2b4e15109afa0310d11783f057a65a5b"} err="failed to get container status 
\"fb5a234ef2f4030874ea2ea28a97aafa2b4e15109afa0310d11783f057a65a5b\": rpc error: code = NotFound desc = could not find container \"fb5a234ef2f4030874ea2ea28a97aafa2b4e15109afa0310d11783f057a65a5b\": container with ID starting with fb5a234ef2f4030874ea2ea28a97aafa2b4e15109afa0310d11783f057a65a5b not found: ID does not exist" Sep 30 17:20:17 crc kubenswrapper[4821]: I0930 17:20:17.389654 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158-config\") pod \"ca7b68c4-bbb5-4b88-aab9-b246a1c8a158\" (UID: \"ca7b68c4-bbb5-4b88-aab9-b246a1c8a158\") " Sep 30 17:20:17 crc kubenswrapper[4821]: I0930 17:20:17.390184 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158-config" (OuterVolumeSpecName: "config") pod "ca7b68c4-bbb5-4b88-aab9-b246a1c8a158" (UID: "ca7b68c4-bbb5-4b88-aab9-b246a1c8a158"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:20:17 crc kubenswrapper[4821]: I0930 17:20:17.433192 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-znw7f"] Sep 30 17:20:17 crc kubenswrapper[4821]: I0930 17:20:17.439685 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-znw7f"] Sep 30 17:20:17 crc kubenswrapper[4821]: I0930 17:20:17.491810 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:18 crc kubenswrapper[4821]: I0930 17:20:18.114625 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-dcc94dcb7-4dt65" event={"ID":"2463ed19-463b-4138-ba45-0890d3173e94","Type":"ContainerStarted","Data":"20071f178dc43433258b7220b217841304f30a6d686309371052c02b1e266a0a"} Sep 30 17:20:18 crc kubenswrapper[4821]: I0930 17:20:18.114963 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-dcc94dcb7-4dt65" Sep 30 17:20:18 crc kubenswrapper[4821]: I0930 17:20:18.117982 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kr849" event={"ID":"9aa40c0f-e07d-43de-92d6-60ba8d6b668d","Type":"ContainerStarted","Data":"a0cd1c0621763e1183f5cf6297cd3a609ca8904e9e3fdd0ad2401660d31688f2"} Sep 30 17:20:18 crc kubenswrapper[4821]: I0930 17:20:18.136766 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-dcc94dcb7-4dt65" podStartSLOduration=3.136749751 podStartE2EDuration="3.136749751s" podCreationTimestamp="2025-09-30 17:20:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:20:18.131932691 +0000 UTC m=+1014.036978635" watchObservedRunningTime="2025-09-30 17:20:18.136749751 +0000 UTC m=+1014.041795695" Sep 30 17:20:18 crc kubenswrapper[4821]: I0930 17:20:18.159616 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-kr849" podStartSLOduration=5.358733884 podStartE2EDuration="42.159600669s" podCreationTimestamp="2025-09-30 17:19:36 +0000 UTC" firstStartedPulling="2025-09-30 17:19:38.901033456 +0000 UTC m=+974.806079400" lastFinishedPulling="2025-09-30 17:20:15.701900241 +0000 UTC m=+1011.606946185" observedRunningTime="2025-09-30 17:20:18.159407005 +0000 UTC m=+1014.064452949" 
watchObservedRunningTime="2025-09-30 17:20:18.159600669 +0000 UTC m=+1014.064646613" Sep 30 17:20:18 crc kubenswrapper[4821]: I0930 17:20:18.606748 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Sep 30 17:20:18 crc kubenswrapper[4821]: I0930 17:20:18.606800 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Sep 30 17:20:18 crc kubenswrapper[4821]: I0930 17:20:18.661625 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Sep 30 17:20:18 crc kubenswrapper[4821]: I0930 17:20:18.676244 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Sep 30 17:20:18 crc kubenswrapper[4821]: I0930 17:20:18.718258 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca7b68c4-bbb5-4b88-aab9-b246a1c8a158" path="/var/lib/kubelet/pods/ca7b68c4-bbb5-4b88-aab9-b246a1c8a158/volumes" Sep 30 17:20:19 crc kubenswrapper[4821]: I0930 17:20:19.110507 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b974b45dd-mbzvm" podUID="abd2779c-c7a7-4d42-8e83-7cbec573d595" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Sep 30 17:20:19 crc kubenswrapper[4821]: I0930 17:20:19.126286 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Sep 30 17:20:19 crc kubenswrapper[4821]: I0930 17:20:19.126329 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Sep 30 17:20:19 crc kubenswrapper[4821]: I0930 17:20:19.248114 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78b9594fb8-nw9qj" podUID="4be55b7f-8f57-44f9-899b-d8e6676e5e02" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Sep 30 17:20:19 crc kubenswrapper[4821]: I0930 17:20:19.486252 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Sep 30 17:20:19 crc kubenswrapper[4821]: I0930 17:20:19.488119 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Sep 30 17:20:19 crc kubenswrapper[4821]: I0930 17:20:19.529129 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Sep 30 17:20:19 crc kubenswrapper[4821]: I0930 17:20:19.551153 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Sep 30 17:20:20 crc kubenswrapper[4821]: I0930 17:20:20.132639 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Sep 30 17:20:20 crc kubenswrapper[4821]: I0930 17:20:20.132685 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Sep 30 17:20:21 crc kubenswrapper[4821]: I0930 17:20:21.145661 4821 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 30 17:20:21 crc kubenswrapper[4821]: I0930 17:20:21.145940 4821 prober_manager.go:312] "Failed to trigger a 
manual run" probe="Readiness" Sep 30 17:20:23 crc kubenswrapper[4821]: I0930 17:20:23.653919 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Sep 30 17:20:23 crc kubenswrapper[4821]: I0930 17:20:23.655218 4821 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 30 17:20:23 crc kubenswrapper[4821]: I0930 17:20:23.669343 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Sep 30 17:20:23 crc kubenswrapper[4821]: I0930 17:20:23.737904 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Sep 30 17:20:23 crc kubenswrapper[4821]: I0930 17:20:23.738017 4821 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 30 17:20:23 crc kubenswrapper[4821]: I0930 17:20:23.903438 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Sep 30 17:20:25 crc kubenswrapper[4821]: I0930 17:20:25.172270 4821 generic.go:334] "Generic (PLEG): container finished" podID="9aa40c0f-e07d-43de-92d6-60ba8d6b668d" containerID="a0cd1c0621763e1183f5cf6297cd3a609ca8904e9e3fdd0ad2401660d31688f2" exitCode=0 Sep 30 17:20:25 crc kubenswrapper[4821]: I0930 17:20:25.172360 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kr849" event={"ID":"9aa40c0f-e07d-43de-92d6-60ba8d6b668d","Type":"ContainerDied","Data":"a0cd1c0621763e1183f5cf6297cd3a609ca8904e9e3fdd0ad2401660d31688f2"} Sep 30 17:20:26 crc kubenswrapper[4821]: I0930 17:20:26.635298 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-kr849" Sep 30 17:20:26 crc kubenswrapper[4821]: I0930 17:20:26.758415 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-db-sync-config-data\") pod \"9aa40c0f-e07d-43de-92d6-60ba8d6b668d\" (UID: \"9aa40c0f-e07d-43de-92d6-60ba8d6b668d\") " Sep 30 17:20:26 crc kubenswrapper[4821]: I0930 17:20:26.759293 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhnz9\" (UniqueName: \"kubernetes.io/projected/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-kube-api-access-qhnz9\") pod \"9aa40c0f-e07d-43de-92d6-60ba8d6b668d\" (UID: \"9aa40c0f-e07d-43de-92d6-60ba8d6b668d\") " Sep 30 17:20:26 crc kubenswrapper[4821]: I0930 17:20:26.759716 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-combined-ca-bundle\") pod \"9aa40c0f-e07d-43de-92d6-60ba8d6b668d\" (UID: \"9aa40c0f-e07d-43de-92d6-60ba8d6b668d\") " Sep 30 17:20:26 crc kubenswrapper[4821]: I0930 17:20:26.759796 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-etc-machine-id\") pod \"9aa40c0f-e07d-43de-92d6-60ba8d6b668d\" (UID: \"9aa40c0f-e07d-43de-92d6-60ba8d6b668d\") " Sep 30 17:20:26 crc kubenswrapper[4821]: I0930 17:20:26.759888 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-config-data\") pod \"9aa40c0f-e07d-43de-92d6-60ba8d6b668d\" (UID: 
\"9aa40c0f-e07d-43de-92d6-60ba8d6b668d\") " Sep 30 17:20:26 crc kubenswrapper[4821]: I0930 17:20:26.760113 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-scripts\") pod \"9aa40c0f-e07d-43de-92d6-60ba8d6b668d\" (UID: \"9aa40c0f-e07d-43de-92d6-60ba8d6b668d\") " Sep 30 17:20:26 crc kubenswrapper[4821]: I0930 17:20:26.760162 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9aa40c0f-e07d-43de-92d6-60ba8d6b668d" (UID: "9aa40c0f-e07d-43de-92d6-60ba8d6b668d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:20:26 crc kubenswrapper[4821]: I0930 17:20:26.760921 4821 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:26 crc kubenswrapper[4821]: I0930 17:20:26.770288 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-kube-api-access-qhnz9" (OuterVolumeSpecName: "kube-api-access-qhnz9") pod "9aa40c0f-e07d-43de-92d6-60ba8d6b668d" (UID: "9aa40c0f-e07d-43de-92d6-60ba8d6b668d"). InnerVolumeSpecName "kube-api-access-qhnz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:20:26 crc kubenswrapper[4821]: I0930 17:20:26.770306 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9aa40c0f-e07d-43de-92d6-60ba8d6b668d" (UID: "9aa40c0f-e07d-43de-92d6-60ba8d6b668d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:20:26 crc kubenswrapper[4821]: I0930 17:20:26.781278 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-scripts" (OuterVolumeSpecName: "scripts") pod "9aa40c0f-e07d-43de-92d6-60ba8d6b668d" (UID: "9aa40c0f-e07d-43de-92d6-60ba8d6b668d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:20:26 crc kubenswrapper[4821]: I0930 17:20:26.821267 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9aa40c0f-e07d-43de-92d6-60ba8d6b668d" (UID: "9aa40c0f-e07d-43de-92d6-60ba8d6b668d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:20:26 crc kubenswrapper[4821]: I0930 17:20:26.862024 4821 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:26 crc kubenswrapper[4821]: I0930 17:20:26.862053 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhnz9\" (UniqueName: \"kubernetes.io/projected/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-kube-api-access-qhnz9\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:26 crc kubenswrapper[4821]: I0930 17:20:26.862063 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:26 crc kubenswrapper[4821]: I0930 17:20:26.862072 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:26 crc kubenswrapper[4821]: I0930 17:20:26.864664 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-config-data" (OuterVolumeSpecName: "config-data") pod "9aa40c0f-e07d-43de-92d6-60ba8d6b668d" (UID: "9aa40c0f-e07d-43de-92d6-60ba8d6b668d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:20:26 crc kubenswrapper[4821]: I0930 17:20:26.964645 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aa40c0f-e07d-43de-92d6-60ba8d6b668d-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.190694 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kr849" event={"ID":"9aa40c0f-e07d-43de-92d6-60ba8d6b668d","Type":"ContainerDied","Data":"1e740e737eaa7f15b1e4d974ced66f37ae5c389c6e7007af16626705c6d01699"} Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.190740 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e740e737eaa7f15b1e4d974ced66f37ae5c389c6e7007af16626705c6d01699" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.190741 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-kr849" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.508704 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Sep 30 17:20:27 crc kubenswrapper[4821]: E0930 17:20:27.509131 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca7b68c4-bbb5-4b88-aab9-b246a1c8a158" containerName="init" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.509147 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca7b68c4-bbb5-4b88-aab9-b246a1c8a158" containerName="init" Sep 30 17:20:27 crc kubenswrapper[4821]: E0930 17:20:27.509177 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca7b68c4-bbb5-4b88-aab9-b246a1c8a158" containerName="dnsmasq-dns" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.509186 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca7b68c4-bbb5-4b88-aab9-b246a1c8a158" containerName="dnsmasq-dns" Sep 30 17:20:27 crc kubenswrapper[4821]: E0930 17:20:27.509198 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aa40c0f-e07d-43de-92d6-60ba8d6b668d" containerName="cinder-db-sync" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.509205 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aa40c0f-e07d-43de-92d6-60ba8d6b668d" containerName="cinder-db-sync" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.509431 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca7b68c4-bbb5-4b88-aab9-b246a1c8a158" containerName="dnsmasq-dns" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.509454 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aa40c0f-e07d-43de-92d6-60ba8d6b668d" containerName="cinder-db-sync" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.510598 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.518265 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.518497 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-44ldx" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.523605 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.523666 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.544466 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.579603 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b63853ff-e8ce-44a7-8ab1-ced03324999d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b63853ff-e8ce-44a7-8ab1-ced03324999d\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.579705 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b63853ff-e8ce-44a7-8ab1-ced03324999d-scripts\") pod \"cinder-scheduler-0\" (UID: \"b63853ff-e8ce-44a7-8ab1-ced03324999d\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.579736 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b63853ff-e8ce-44a7-8ab1-ced03324999d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b63853ff-e8ce-44a7-8ab1-ced03324999d\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.579762 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rv7s\" (UniqueName: \"kubernetes.io/projected/b63853ff-e8ce-44a7-8ab1-ced03324999d-kube-api-access-2rv7s\") pod \"cinder-scheduler-0\" (UID: \"b63853ff-e8ce-44a7-8ab1-ced03324999d\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.579802 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b63853ff-e8ce-44a7-8ab1-ced03324999d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b63853ff-e8ce-44a7-8ab1-ced03324999d\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.579844 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b63853ff-e8ce-44a7-8ab1-ced03324999d-config-data\") pod \"cinder-scheduler-0\" (UID: \"b63853ff-e8ce-44a7-8ab1-ced03324999d\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.607644 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f64d5748f-ckqhg"] Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.609476 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f64d5748f-ckqhg" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.622133 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f64d5748f-ckqhg"] Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.683017 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b63853ff-e8ce-44a7-8ab1-ced03324999d-scripts\") pod \"cinder-scheduler-0\" (UID: \"b63853ff-e8ce-44a7-8ab1-ced03324999d\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.683070 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b63853ff-e8ce-44a7-8ab1-ced03324999d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b63853ff-e8ce-44a7-8ab1-ced03324999d\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.683277 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b63853ff-e8ce-44a7-8ab1-ced03324999d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b63853ff-e8ce-44a7-8ab1-ced03324999d\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.683351 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rv7s\" (UniqueName: \"kubernetes.io/projected/b63853ff-e8ce-44a7-8ab1-ced03324999d-kube-api-access-2rv7s\") pod \"cinder-scheduler-0\" (UID: \"b63853ff-e8ce-44a7-8ab1-ced03324999d\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.683379 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b63853ff-e8ce-44a7-8ab1-ced03324999d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b63853ff-e8ce-44a7-8ab1-ced03324999d\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.683412 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fv66\" (UniqueName: \"kubernetes.io/projected/a45b832b-22cd-47fa-bd7f-83ad23d2d135-kube-api-access-6fv66\") pod \"dnsmasq-dns-f64d5748f-ckqhg\" (UID: \"a45b832b-22cd-47fa-bd7f-83ad23d2d135\") " pod="openstack/dnsmasq-dns-f64d5748f-ckqhg" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.683443 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a45b832b-22cd-47fa-bd7f-83ad23d2d135-ovsdbserver-nb\") pod \"dnsmasq-dns-f64d5748f-ckqhg\" (UID: \"a45b832b-22cd-47fa-bd7f-83ad23d2d135\") " pod="openstack/dnsmasq-dns-f64d5748f-ckqhg" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.683465 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a45b832b-22cd-47fa-bd7f-83ad23d2d135-config\") pod \"dnsmasq-dns-f64d5748f-ckqhg\" (UID: \"a45b832b-22cd-47fa-bd7f-83ad23d2d135\") " pod="openstack/dnsmasq-dns-f64d5748f-ckqhg" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.683485 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b63853ff-e8ce-44a7-8ab1-ced03324999d-config-data\") pod 
\"cinder-scheduler-0\" (UID: \"b63853ff-e8ce-44a7-8ab1-ced03324999d\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.683509 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a45b832b-22cd-47fa-bd7f-83ad23d2d135-ovsdbserver-sb\") pod \"dnsmasq-dns-f64d5748f-ckqhg\" (UID: \"a45b832b-22cd-47fa-bd7f-83ad23d2d135\") " pod="openstack/dnsmasq-dns-f64d5748f-ckqhg" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.683536 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b63853ff-e8ce-44a7-8ab1-ced03324999d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b63853ff-e8ce-44a7-8ab1-ced03324999d\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.683571 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a45b832b-22cd-47fa-bd7f-83ad23d2d135-dns-svc\") pod \"dnsmasq-dns-f64d5748f-ckqhg\" (UID: \"a45b832b-22cd-47fa-bd7f-83ad23d2d135\") " pod="openstack/dnsmasq-dns-f64d5748f-ckqhg" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.699991 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b63853ff-e8ce-44a7-8ab1-ced03324999d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b63853ff-e8ce-44a7-8ab1-ced03324999d\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.700617 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b63853ff-e8ce-44a7-8ab1-ced03324999d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b63853ff-e8ce-44a7-8ab1-ced03324999d\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.706155 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b63853ff-e8ce-44a7-8ab1-ced03324999d-config-data\") pod \"cinder-scheduler-0\" (UID: \"b63853ff-e8ce-44a7-8ab1-ced03324999d\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.716742 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b63853ff-e8ce-44a7-8ab1-ced03324999d-scripts\") pod \"cinder-scheduler-0\" (UID: \"b63853ff-e8ce-44a7-8ab1-ced03324999d\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.730922 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rv7s\" (UniqueName: \"kubernetes.io/projected/b63853ff-e8ce-44a7-8ab1-ced03324999d-kube-api-access-2rv7s\") pod \"cinder-scheduler-0\" (UID: \"b63853ff-e8ce-44a7-8ab1-ced03324999d\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.784794 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a45b832b-22cd-47fa-bd7f-83ad23d2d135-ovsdbserver-nb\") pod \"dnsmasq-dns-f64d5748f-ckqhg\" (UID: \"a45b832b-22cd-47fa-bd7f-83ad23d2d135\") " pod="openstack/dnsmasq-dns-f64d5748f-ckqhg" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.785104 4821 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a45b832b-22cd-47fa-bd7f-83ad23d2d135-config\") pod \"dnsmasq-dns-f64d5748f-ckqhg\" (UID: \"a45b832b-22cd-47fa-bd7f-83ad23d2d135\") " pod="openstack/dnsmasq-dns-f64d5748f-ckqhg" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.785158 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a45b832b-22cd-47fa-bd7f-83ad23d2d135-ovsdbserver-sb\") pod \"dnsmasq-dns-f64d5748f-ckqhg\" (UID: \"a45b832b-22cd-47fa-bd7f-83ad23d2d135\") " pod="openstack/dnsmasq-dns-f64d5748f-ckqhg" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.785216 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a45b832b-22cd-47fa-bd7f-83ad23d2d135-dns-svc\") pod \"dnsmasq-dns-f64d5748f-ckqhg\" (UID: \"a45b832b-22cd-47fa-bd7f-83ad23d2d135\") " pod="openstack/dnsmasq-dns-f64d5748f-ckqhg" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.785373 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fv66\" (UniqueName: \"kubernetes.io/projected/a45b832b-22cd-47fa-bd7f-83ad23d2d135-kube-api-access-6fv66\") pod \"dnsmasq-dns-f64d5748f-ckqhg\" (UID: \"a45b832b-22cd-47fa-bd7f-83ad23d2d135\") " pod="openstack/dnsmasq-dns-f64d5748f-ckqhg" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.786603 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a45b832b-22cd-47fa-bd7f-83ad23d2d135-ovsdbserver-nb\") pod \"dnsmasq-dns-f64d5748f-ckqhg\" (UID: \"a45b832b-22cd-47fa-bd7f-83ad23d2d135\") " pod="openstack/dnsmasq-dns-f64d5748f-ckqhg" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.787046 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a45b832b-22cd-47fa-bd7f-83ad23d2d135-ovsdbserver-sb\") pod \"dnsmasq-dns-f64d5748f-ckqhg\" (UID: \"a45b832b-22cd-47fa-bd7f-83ad23d2d135\") " pod="openstack/dnsmasq-dns-f64d5748f-ckqhg" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.787384 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a45b832b-22cd-47fa-bd7f-83ad23d2d135-dns-svc\") pod \"dnsmasq-dns-f64d5748f-ckqhg\" (UID: \"a45b832b-22cd-47fa-bd7f-83ad23d2d135\") " pod="openstack/dnsmasq-dns-f64d5748f-ckqhg" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.787731 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a45b832b-22cd-47fa-bd7f-83ad23d2d135-config\") pod \"dnsmasq-dns-f64d5748f-ckqhg\" (UID: \"a45b832b-22cd-47fa-bd7f-83ad23d2d135\") " pod="openstack/dnsmasq-dns-f64d5748f-ckqhg" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.811238 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fv66\" (UniqueName: \"kubernetes.io/projected/a45b832b-22cd-47fa-bd7f-83ad23d2d135-kube-api-access-6fv66\") pod \"dnsmasq-dns-f64d5748f-ckqhg\" (UID: \"a45b832b-22cd-47fa-bd7f-83ad23d2d135\") " pod="openstack/dnsmasq-dns-f64d5748f-ckqhg" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.846466 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.847492 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.849741 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.855385 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.862809 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.891971 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gcbv\" (UniqueName: \"kubernetes.io/projected/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-kube-api-access-7gcbv\") pod \"cinder-api-0\" (UID: \"88a60aa8-fb9c-4813-99b2-7d01ec08aa05\") " pod="openstack/cinder-api-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.892174 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"88a60aa8-fb9c-4813-99b2-7d01ec08aa05\") " pod="openstack/cinder-api-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.892285 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-config-data\") pod \"cinder-api-0\" (UID: \"88a60aa8-fb9c-4813-99b2-7d01ec08aa05\") " pod="openstack/cinder-api-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.892363 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-config-data-custom\") pod \"cinder-api-0\" (UID: \"88a60aa8-fb9c-4813-99b2-7d01ec08aa05\") " pod="openstack/cinder-api-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.892398 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-etc-machine-id\") pod \"cinder-api-0\" (UID: \"88a60aa8-fb9c-4813-99b2-7d01ec08aa05\") " pod="openstack/cinder-api-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.892444 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-logs\") pod \"cinder-api-0\" (UID: \"88a60aa8-fb9c-4813-99b2-7d01ec08aa05\") " pod="openstack/cinder-api-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.892469 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-scripts\") pod \"cinder-api-0\" (UID: \"88a60aa8-fb9c-4813-99b2-7d01ec08aa05\") " pod="openstack/cinder-api-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.937601 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f64d5748f-ckqhg" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.994958 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"88a60aa8-fb9c-4813-99b2-7d01ec08aa05\") " pod="openstack/cinder-api-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.995068 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-config-data\") pod \"cinder-api-0\" (UID: \"88a60aa8-fb9c-4813-99b2-7d01ec08aa05\") " pod="openstack/cinder-api-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.995622 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-config-data-custom\") pod \"cinder-api-0\" (UID: \"88a60aa8-fb9c-4813-99b2-7d01ec08aa05\") " pod="openstack/cinder-api-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.995658 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-etc-machine-id\") pod \"cinder-api-0\" (UID: \"88a60aa8-fb9c-4813-99b2-7d01ec08aa05\") " pod="openstack/cinder-api-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.995689 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-logs\") pod \"cinder-api-0\" (UID: \"88a60aa8-fb9c-4813-99b2-7d01ec08aa05\") " pod="openstack/cinder-api-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.995712 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-scripts\") pod \"cinder-api-0\" (UID: \"88a60aa8-fb9c-4813-99b2-7d01ec08aa05\") " pod="openstack/cinder-api-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.995731 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gcbv\" (UniqueName: \"kubernetes.io/projected/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-kube-api-access-7gcbv\") pod \"cinder-api-0\" (UID: \"88a60aa8-fb9c-4813-99b2-7d01ec08aa05\") " pod="openstack/cinder-api-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.995953 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-etc-machine-id\") pod \"cinder-api-0\" (UID: \"88a60aa8-fb9c-4813-99b2-7d01ec08aa05\") " pod="openstack/cinder-api-0" Sep 30 17:20:27 crc kubenswrapper[4821]: I0930 17:20:27.996201 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-logs\") pod \"cinder-api-0\" (UID: \"88a60aa8-fb9c-4813-99b2-7d01ec08aa05\") " pod="openstack/cinder-api-0" Sep 30 17:20:28 crc kubenswrapper[4821]: I0930 17:20:28.004760 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-config-data-custom\") pod \"cinder-api-0\" (UID: \"88a60aa8-fb9c-4813-99b2-7d01ec08aa05\") " 
pod="openstack/cinder-api-0" Sep 30 17:20:28 crc kubenswrapper[4821]: I0930 17:20:28.004996 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-scripts\") pod \"cinder-api-0\" (UID: \"88a60aa8-fb9c-4813-99b2-7d01ec08aa05\") " pod="openstack/cinder-api-0" Sep 30 17:20:28 crc kubenswrapper[4821]: I0930 17:20:28.007411 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-config-data\") pod \"cinder-api-0\" (UID: \"88a60aa8-fb9c-4813-99b2-7d01ec08aa05\") " pod="openstack/cinder-api-0" Sep 30 17:20:28 crc kubenswrapper[4821]: I0930 17:20:28.007897 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"88a60aa8-fb9c-4813-99b2-7d01ec08aa05\") " pod="openstack/cinder-api-0" Sep 30 17:20:28 crc kubenswrapper[4821]: I0930 17:20:28.024435 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gcbv\" (UniqueName: \"kubernetes.io/projected/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-kube-api-access-7gcbv\") pod \"cinder-api-0\" (UID: \"88a60aa8-fb9c-4813-99b2-7d01ec08aa05\") " pod="openstack/cinder-api-0" Sep 30 17:20:28 crc kubenswrapper[4821]: I0930 17:20:28.254916 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Sep 30 17:20:28 crc kubenswrapper[4821]: I0930 17:20:28.406742 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Sep 30 17:20:28 crc kubenswrapper[4821]: I0930 17:20:28.507002 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f64d5748f-ckqhg"] Sep 30 17:20:28 crc kubenswrapper[4821]: I0930 17:20:28.776583 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Sep 30 17:20:29 crc kubenswrapper[4821]: I0930 17:20:29.109503 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b974b45dd-mbzvm" podUID="abd2779c-c7a7-4d42-8e83-7cbec573d595" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Sep 30 17:20:29 crc kubenswrapper[4821]: I0930 17:20:29.215229 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"88a60aa8-fb9c-4813-99b2-7d01ec08aa05","Type":"ContainerStarted","Data":"ab1b1af1819144ba0f7109b89ab752f887bbff9b91de3d6eb088259ee616e17c"} Sep 30 17:20:29 crc kubenswrapper[4821]: I0930 17:20:29.216793 4821 generic.go:334] "Generic (PLEG): container finished" podID="a45b832b-22cd-47fa-bd7f-83ad23d2d135" containerID="7c8adb45471709c246f83939b757fbd276c1ceecc352a8177d6284b63b156583" exitCode=0 Sep 30 17:20:29 crc kubenswrapper[4821]: I0930 17:20:29.216834 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f64d5748f-ckqhg" event={"ID":"a45b832b-22cd-47fa-bd7f-83ad23d2d135","Type":"ContainerDied","Data":"7c8adb45471709c246f83939b757fbd276c1ceecc352a8177d6284b63b156583"} Sep 30 17:20:29 crc kubenswrapper[4821]: I0930 17:20:29.216849 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f64d5748f-ckqhg" 
event={"ID":"a45b832b-22cd-47fa-bd7f-83ad23d2d135","Type":"ContainerStarted","Data":"469eb22c07799a879c99efc2e62b9f2abd74f7cfdd1db82aa2777c21a4f44b55"} Sep 30 17:20:29 crc kubenswrapper[4821]: I0930 17:20:29.225015 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b63853ff-e8ce-44a7-8ab1-ced03324999d","Type":"ContainerStarted","Data":"0b65a7de65d541fb68937534e005db9f328ced4a2d5f719101c77af0274bb121"} Sep 30 17:20:29 crc kubenswrapper[4821]: I0930 17:20:29.250175 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78b9594fb8-nw9qj" podUID="4be55b7f-8f57-44f9-899b-d8e6676e5e02" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Sep 30 17:20:30 crc kubenswrapper[4821]: I0930 17:20:30.239201 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f64d5748f-ckqhg" event={"ID":"a45b832b-22cd-47fa-bd7f-83ad23d2d135","Type":"ContainerStarted","Data":"f4496f07efe2e8406a06d98600d8f1f5ef294c6a272c36ee713b4899caae9b35"} Sep 30 17:20:30 crc kubenswrapper[4821]: I0930 17:20:30.239646 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f64d5748f-ckqhg" Sep 30 17:20:30 crc kubenswrapper[4821]: I0930 17:20:30.263578 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"88a60aa8-fb9c-4813-99b2-7d01ec08aa05","Type":"ContainerStarted","Data":"1c24ceef7856b575ffd84a4eeac247e6454d7967a0e7976919e20257b42d15ce"} Sep 30 17:20:30 crc kubenswrapper[4821]: I0930 17:20:30.266999 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f64d5748f-ckqhg" podStartSLOduration=3.266979537 podStartE2EDuration="3.266979537s" podCreationTimestamp="2025-09-30 17:20:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:20:30.258890226 +0000 UTC m=+1026.163936170" watchObservedRunningTime="2025-09-30 17:20:30.266979537 +0000 UTC m=+1026.172025481" Sep 30 17:20:30 crc kubenswrapper[4821]: I0930 17:20:30.565574 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Sep 30 17:20:31 crc kubenswrapper[4821]: I0930 17:20:31.274768 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b63853ff-e8ce-44a7-8ab1-ced03324999d","Type":"ContainerStarted","Data":"8cf2e2e8264341ba92599fc3b16a959200cde473d4b8897ea4886373d4587b15"} Sep 30 17:20:31 crc kubenswrapper[4821]: I0930 17:20:31.275006 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b63853ff-e8ce-44a7-8ab1-ced03324999d","Type":"ContainerStarted","Data":"fcb60c3eea1196d2fa943ebfaa32fe8f6962db1cfe4ae1e460a36b29fdce6359"} Sep 30 17:20:31 crc kubenswrapper[4821]: I0930 17:20:31.279297 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="88a60aa8-fb9c-4813-99b2-7d01ec08aa05" containerName="cinder-api-log" containerID="cri-o://1c24ceef7856b575ffd84a4eeac247e6454d7967a0e7976919e20257b42d15ce" gracePeriod=30 Sep 30 17:20:31 crc kubenswrapper[4821]: I0930 17:20:31.279516 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"88a60aa8-fb9c-4813-99b2-7d01ec08aa05","Type":"ContainerStarted","Data":"ad1fb2e868d1f5b5a7dd362a2e669a7857d58cfff69e134bc0ae56f28a23eb7a"} Sep 30 17:20:31 crc kubenswrapper[4821]: I0930 17:20:31.279557 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Sep 30 17:20:31 crc kubenswrapper[4821]: I0930 17:20:31.279581 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="88a60aa8-fb9c-4813-99b2-7d01ec08aa05" containerName="cinder-api" containerID="cri-o://ad1fb2e868d1f5b5a7dd362a2e669a7857d58cfff69e134bc0ae56f28a23eb7a" gracePeriod=30 Sep 30 17:20:31 crc kubenswrapper[4821]: I0930 17:20:31.306996 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.183941312 podStartE2EDuration="4.306977113s" podCreationTimestamp="2025-09-30 17:20:27 +0000 UTC" firstStartedPulling="2025-09-30 17:20:28.421828139 +0000 UTC m=+1024.326874073" lastFinishedPulling="2025-09-30 17:20:29.54486393 +0000 UTC m=+1025.449909874" observedRunningTime="2025-09-30 17:20:31.299104027 +0000 UTC m=+1027.204149971" watchObservedRunningTime="2025-09-30 17:20:31.306977113 +0000 UTC m=+1027.212023057" Sep 30 17:20:31 crc kubenswrapper[4821]: I0930 17:20:31.329985 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.329965945 podStartE2EDuration="4.329965945s" podCreationTimestamp="2025-09-30 17:20:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:20:31.321468763 +0000 UTC m=+1027.226514697" watchObservedRunningTime="2025-09-30 17:20:31.329965945 +0000 UTC m=+1027.235011889" Sep 30 17:20:32 crc kubenswrapper[4821]: I0930 17:20:32.288812 4821 generic.go:334] "Generic (PLEG): container finished" podID="88a60aa8-fb9c-4813-99b2-7d01ec08aa05" containerID="1c24ceef7856b575ffd84a4eeac247e6454d7967a0e7976919e20257b42d15ce" exitCode=143 Sep 30 17:20:32 crc kubenswrapper[4821]: I0930 17:20:32.289360 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"88a60aa8-fb9c-4813-99b2-7d01ec08aa05","Type":"ContainerDied","Data":"1c24ceef7856b575ffd84a4eeac247e6454d7967a0e7976919e20257b42d15ce"} Sep 30 17:20:32 crc kubenswrapper[4821]: I0930 17:20:32.847855 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Sep 30 17:20:34 crc kubenswrapper[4821]: I0930 17:20:34.311428 4821 generic.go:334] "Generic (PLEG): container finished" podID="c1e20153-619c-4c3a-93ef-39c4b87d535e" containerID="9ee08e4aa4a5c0e87bd8281c46e23edb718b6ba7a62ac6ae30c11cdcadc4b0ba" exitCode=137 Sep 30 17:20:34 crc kubenswrapper[4821]: I0930 17:20:34.311759 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7698bcb95c-njjrf" event={"ID":"c1e20153-619c-4c3a-93ef-39c4b87d535e","Type":"ContainerDied","Data":"9ee08e4aa4a5c0e87bd8281c46e23edb718b6ba7a62ac6ae30c11cdcadc4b0ba"} Sep 30 17:20:34 crc kubenswrapper[4821]: I0930 17:20:34.325135 4821 generic.go:334] "Generic (PLEG): container finished" podID="145b7040-eb73-4b29-9e7a-a96d867530c5" containerID="a0ad46de1519e1676ac4e5d92f313c7d090e740668982e886722e854d3f3afe6" exitCode=137 Sep 30 17:20:34 crc kubenswrapper[4821]: I0930 17:20:34.325185 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7ddb7997dc-cnx5j" 
event={"ID":"145b7040-eb73-4b29-9e7a-a96d867530c5","Type":"ContainerDied","Data":"a0ad46de1519e1676ac4e5d92f313c7d090e740668982e886722e854d3f3afe6"} Sep 30 17:20:34 crc kubenswrapper[4821]: I0930 17:20:34.879643 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7ddb7997dc-cnx5j" Sep 30 17:20:34 crc kubenswrapper[4821]: I0930 17:20:34.887554 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7698bcb95c-njjrf" Sep 30 17:20:34 crc kubenswrapper[4821]: I0930 17:20:34.969137 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1e20153-619c-4c3a-93ef-39c4b87d535e-logs\") pod \"c1e20153-619c-4c3a-93ef-39c4b87d535e\" (UID: \"c1e20153-619c-4c3a-93ef-39c4b87d535e\") " Sep 30 17:20:34 crc kubenswrapper[4821]: I0930 17:20:34.969471 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1e20153-619c-4c3a-93ef-39c4b87d535e-scripts\") pod \"c1e20153-619c-4c3a-93ef-39c4b87d535e\" (UID: \"c1e20153-619c-4c3a-93ef-39c4b87d535e\") " Sep 30 17:20:34 crc kubenswrapper[4821]: I0930 17:20:34.969481 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1e20153-619c-4c3a-93ef-39c4b87d535e-logs" (OuterVolumeSpecName: "logs") pod "c1e20153-619c-4c3a-93ef-39c4b87d535e" (UID: "c1e20153-619c-4c3a-93ef-39c4b87d535e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:20:34 crc kubenswrapper[4821]: I0930 17:20:34.969535 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1e20153-619c-4c3a-93ef-39c4b87d535e-horizon-secret-key\") pod \"c1e20153-619c-4c3a-93ef-39c4b87d535e\" (UID: \"c1e20153-619c-4c3a-93ef-39c4b87d535e\") " Sep 30 17:20:34 crc kubenswrapper[4821]: I0930 17:20:34.969558 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1e20153-619c-4c3a-93ef-39c4b87d535e-config-data\") pod \"c1e20153-619c-4c3a-93ef-39c4b87d535e\" (UID: \"c1e20153-619c-4c3a-93ef-39c4b87d535e\") " Sep 30 17:20:34 crc kubenswrapper[4821]: I0930 17:20:34.969592 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/145b7040-eb73-4b29-9e7a-a96d867530c5-scripts\") pod \"145b7040-eb73-4b29-9e7a-a96d867530c5\" (UID: \"145b7040-eb73-4b29-9e7a-a96d867530c5\") " Sep 30 17:20:34 crc kubenswrapper[4821]: I0930 17:20:34.969648 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/145b7040-eb73-4b29-9e7a-a96d867530c5-horizon-secret-key\") pod \"145b7040-eb73-4b29-9e7a-a96d867530c5\" (UID: \"145b7040-eb73-4b29-9e7a-a96d867530c5\") " Sep 30 17:20:34 crc kubenswrapper[4821]: I0930 17:20:34.969721 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jr847\" (UniqueName: \"kubernetes.io/projected/145b7040-eb73-4b29-9e7a-a96d867530c5-kube-api-access-jr847\") pod \"145b7040-eb73-4b29-9e7a-a96d867530c5\" (UID: \"145b7040-eb73-4b29-9e7a-a96d867530c5\") " Sep 30 17:20:34 crc kubenswrapper[4821]: I0930 17:20:34.969739 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clnqt\" 
(UniqueName: \"kubernetes.io/projected/c1e20153-619c-4c3a-93ef-39c4b87d535e-kube-api-access-clnqt\") pod \"c1e20153-619c-4c3a-93ef-39c4b87d535e\" (UID: \"c1e20153-619c-4c3a-93ef-39c4b87d535e\") " Sep 30 17:20:34 crc kubenswrapper[4821]: I0930 17:20:34.969771 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/145b7040-eb73-4b29-9e7a-a96d867530c5-config-data\") pod \"145b7040-eb73-4b29-9e7a-a96d867530c5\" (UID: \"145b7040-eb73-4b29-9e7a-a96d867530c5\") " Sep 30 17:20:34 crc kubenswrapper[4821]: I0930 17:20:34.970132 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/145b7040-eb73-4b29-9e7a-a96d867530c5-logs\") pod \"145b7040-eb73-4b29-9e7a-a96d867530c5\" (UID: \"145b7040-eb73-4b29-9e7a-a96d867530c5\") " Sep 30 17:20:34 crc kubenswrapper[4821]: I0930 17:20:34.970449 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1e20153-619c-4c3a-93ef-39c4b87d535e-logs\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:34 crc kubenswrapper[4821]: I0930 17:20:34.970874 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/145b7040-eb73-4b29-9e7a-a96d867530c5-logs" (OuterVolumeSpecName: "logs") pod "145b7040-eb73-4b29-9e7a-a96d867530c5" (UID: "145b7040-eb73-4b29-9e7a-a96d867530c5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:20:34 crc kubenswrapper[4821]: I0930 17:20:34.978863 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1e20153-619c-4c3a-93ef-39c4b87d535e-kube-api-access-clnqt" (OuterVolumeSpecName: "kube-api-access-clnqt") pod "c1e20153-619c-4c3a-93ef-39c4b87d535e" (UID: "c1e20153-619c-4c3a-93ef-39c4b87d535e"). InnerVolumeSpecName "kube-api-access-clnqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:20:34 crc kubenswrapper[4821]: I0930 17:20:34.990186 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/145b7040-eb73-4b29-9e7a-a96d867530c5-kube-api-access-jr847" (OuterVolumeSpecName: "kube-api-access-jr847") pod "145b7040-eb73-4b29-9e7a-a96d867530c5" (UID: "145b7040-eb73-4b29-9e7a-a96d867530c5"). InnerVolumeSpecName "kube-api-access-jr847". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:20:34 crc kubenswrapper[4821]: I0930 17:20:34.991219 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1e20153-619c-4c3a-93ef-39c4b87d535e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c1e20153-619c-4c3a-93ef-39c4b87d535e" (UID: "c1e20153-619c-4c3a-93ef-39c4b87d535e"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:20:34 crc kubenswrapper[4821]: I0930 17:20:34.991265 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/145b7040-eb73-4b29-9e7a-a96d867530c5-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "145b7040-eb73-4b29-9e7a-a96d867530c5" (UID: "145b7040-eb73-4b29-9e7a-a96d867530c5"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:20:34 crc kubenswrapper[4821]: I0930 17:20:34.996166 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/145b7040-eb73-4b29-9e7a-a96d867530c5-config-data" (OuterVolumeSpecName: "config-data") pod "145b7040-eb73-4b29-9e7a-a96d867530c5" (UID: "145b7040-eb73-4b29-9e7a-a96d867530c5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:20:34 crc kubenswrapper[4821]: I0930 17:20:34.996277 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1e20153-619c-4c3a-93ef-39c4b87d535e-config-data" (OuterVolumeSpecName: "config-data") pod "c1e20153-619c-4c3a-93ef-39c4b87d535e" (UID: "c1e20153-619c-4c3a-93ef-39c4b87d535e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:20:35 crc kubenswrapper[4821]: I0930 17:20:35.003123 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/145b7040-eb73-4b29-9e7a-a96d867530c5-scripts" (OuterVolumeSpecName: "scripts") pod "145b7040-eb73-4b29-9e7a-a96d867530c5" (UID: "145b7040-eb73-4b29-9e7a-a96d867530c5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:20:35 crc kubenswrapper[4821]: I0930 17:20:35.004038 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1e20153-619c-4c3a-93ef-39c4b87d535e-scripts" (OuterVolumeSpecName: "scripts") pod "c1e20153-619c-4c3a-93ef-39c4b87d535e" (UID: "c1e20153-619c-4c3a-93ef-39c4b87d535e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:20:35 crc kubenswrapper[4821]: I0930 17:20:35.072105 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1e20153-619c-4c3a-93ef-39c4b87d535e-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:35 crc kubenswrapper[4821]: I0930 17:20:35.072139 4821 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1e20153-619c-4c3a-93ef-39c4b87d535e-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:35 crc kubenswrapper[4821]: I0930 17:20:35.072150 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1e20153-619c-4c3a-93ef-39c4b87d535e-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:35 crc kubenswrapper[4821]: I0930 17:20:35.072159 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/145b7040-eb73-4b29-9e7a-a96d867530c5-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:35 crc kubenswrapper[4821]: I0930 17:20:35.072167 4821 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/145b7040-eb73-4b29-9e7a-a96d867530c5-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:35 crc kubenswrapper[4821]: I0930 17:20:35.072176 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jr847\" (UniqueName: \"kubernetes.io/projected/145b7040-eb73-4b29-9e7a-a96d867530c5-kube-api-access-jr847\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:35 crc kubenswrapper[4821]: I0930 17:20:35.072185 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clnqt\" (UniqueName: 
\"kubernetes.io/projected/c1e20153-619c-4c3a-93ef-39c4b87d535e-kube-api-access-clnqt\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:35 crc kubenswrapper[4821]: I0930 17:20:35.072192 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/145b7040-eb73-4b29-9e7a-a96d867530c5-logs\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:35 crc kubenswrapper[4821]: I0930 17:20:35.072200 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/145b7040-eb73-4b29-9e7a-a96d867530c5-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:35 crc kubenswrapper[4821]: I0930 17:20:35.340824 4821 generic.go:334] "Generic (PLEG): container finished" podID="c1e20153-619c-4c3a-93ef-39c4b87d535e" containerID="2191569581b539409bb99d26050a98cad5524588cb6a92034708d80248ac1a33" exitCode=137 Sep 30 17:20:35 crc kubenswrapper[4821]: I0930 17:20:35.340879 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7698bcb95c-njjrf" event={"ID":"c1e20153-619c-4c3a-93ef-39c4b87d535e","Type":"ContainerDied","Data":"2191569581b539409bb99d26050a98cad5524588cb6a92034708d80248ac1a33"} Sep 30 17:20:35 crc kubenswrapper[4821]: I0930 17:20:35.340906 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7698bcb95c-njjrf" event={"ID":"c1e20153-619c-4c3a-93ef-39c4b87d535e","Type":"ContainerDied","Data":"beb1c7a432107f7a89041076a65ac4dd2de6b48302e9303f1ff81687db95c531"} Sep 30 17:20:35 crc kubenswrapper[4821]: I0930 17:20:35.340922 4821 scope.go:117] "RemoveContainer" containerID="2191569581b539409bb99d26050a98cad5524588cb6a92034708d80248ac1a33" Sep 30 17:20:35 crc kubenswrapper[4821]: I0930 17:20:35.341031 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7698bcb95c-njjrf" Sep 30 17:20:35 crc kubenswrapper[4821]: I0930 17:20:35.346491 4821 generic.go:334] "Generic (PLEG): container finished" podID="145b7040-eb73-4b29-9e7a-a96d867530c5" containerID="44d96c13026608943cb8f4546f8344620f310d0e3451cbdd705e0b0ca636be17" exitCode=137 Sep 30 17:20:35 crc kubenswrapper[4821]: I0930 17:20:35.346530 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7ddb7997dc-cnx5j" event={"ID":"145b7040-eb73-4b29-9e7a-a96d867530c5","Type":"ContainerDied","Data":"44d96c13026608943cb8f4546f8344620f310d0e3451cbdd705e0b0ca636be17"} Sep 30 17:20:35 crc kubenswrapper[4821]: I0930 17:20:35.346555 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7ddb7997dc-cnx5j" event={"ID":"145b7040-eb73-4b29-9e7a-a96d867530c5","Type":"ContainerDied","Data":"0a06a18fe51474c2d0b7c5c28b83a94244a67f36a58279c92932e067949d5415"} Sep 30 17:20:35 crc kubenswrapper[4821]: I0930 17:20:35.346763 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7ddb7997dc-cnx5j" Sep 30 17:20:35 crc kubenswrapper[4821]: I0930 17:20:35.387469 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7698bcb95c-njjrf"] Sep 30 17:20:35 crc kubenswrapper[4821]: I0930 17:20:35.394677 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7698bcb95c-njjrf"] Sep 30 17:20:35 crc kubenswrapper[4821]: I0930 17:20:35.403141 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7ddb7997dc-cnx5j"] Sep 30 17:20:35 crc kubenswrapper[4821]: I0930 17:20:35.408213 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7ddb7997dc-cnx5j"] Sep 30 17:20:35 crc kubenswrapper[4821]: I0930 17:20:35.531024 4821 scope.go:117] "RemoveContainer" containerID="9ee08e4aa4a5c0e87bd8281c46e23edb718b6ba7a62ac6ae30c11cdcadc4b0ba" Sep 30 17:20:35 crc kubenswrapper[4821]: I0930 17:20:35.643544 4821 scope.go:117] "RemoveContainer" containerID="2191569581b539409bb99d26050a98cad5524588cb6a92034708d80248ac1a33" Sep 30 17:20:35 crc kubenswrapper[4821]: E0930 17:20:35.643991 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2191569581b539409bb99d26050a98cad5524588cb6a92034708d80248ac1a33\": container with ID starting with 2191569581b539409bb99d26050a98cad5524588cb6a92034708d80248ac1a33 not found: ID does not exist" containerID="2191569581b539409bb99d26050a98cad5524588cb6a92034708d80248ac1a33" Sep 30 17:20:35 crc kubenswrapper[4821]: I0930 17:20:35.644026 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2191569581b539409bb99d26050a98cad5524588cb6a92034708d80248ac1a33"} err="failed to get container status \"2191569581b539409bb99d26050a98cad5524588cb6a92034708d80248ac1a33\": rpc error: code = NotFound desc = could not find container \"2191569581b539409bb99d26050a98cad5524588cb6a92034708d80248ac1a33\": container with ID starting with 2191569581b539409bb99d26050a98cad5524588cb6a92034708d80248ac1a33 not found: ID does not exist" Sep 30 17:20:35 crc kubenswrapper[4821]: I0930 17:20:35.644048 4821 scope.go:117] "RemoveContainer" containerID="9ee08e4aa4a5c0e87bd8281c46e23edb718b6ba7a62ac6ae30c11cdcadc4b0ba" Sep 30 17:20:35 crc kubenswrapper[4821]: E0930 17:20:35.644608 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ee08e4aa4a5c0e87bd8281c46e23edb718b6ba7a62ac6ae30c11cdcadc4b0ba\": container with ID starting with 9ee08e4aa4a5c0e87bd8281c46e23edb718b6ba7a62ac6ae30c11cdcadc4b0ba not found: ID does not exist" containerID="9ee08e4aa4a5c0e87bd8281c46e23edb718b6ba7a62ac6ae30c11cdcadc4b0ba" Sep 30 17:20:35 crc kubenswrapper[4821]: I0930 17:20:35.644664 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ee08e4aa4a5c0e87bd8281c46e23edb718b6ba7a62ac6ae30c11cdcadc4b0ba"} err="failed to get container status \"9ee08e4aa4a5c0e87bd8281c46e23edb718b6ba7a62ac6ae30c11cdcadc4b0ba\": rpc error: code = NotFound desc = could not find container \"9ee08e4aa4a5c0e87bd8281c46e23edb718b6ba7a62ac6ae30c11cdcadc4b0ba\": container with ID starting with 9ee08e4aa4a5c0e87bd8281c46e23edb718b6ba7a62ac6ae30c11cdcadc4b0ba not found: ID does not exist" Sep 30 17:20:35 crc kubenswrapper[4821]: I0930 17:20:35.644700 4821 scope.go:117] "RemoveContainer" containerID="44d96c13026608943cb8f4546f8344620f310d0e3451cbdd705e0b0ca636be17" Sep 30 17:20:35 
crc kubenswrapper[4821]: I0930 17:20:35.833708 4821 scope.go:117] "RemoveContainer" containerID="a0ad46de1519e1676ac4e5d92f313c7d090e740668982e886722e854d3f3afe6" Sep 30 17:20:35 crc kubenswrapper[4821]: I0930 17:20:35.936829 4821 scope.go:117] "RemoveContainer" containerID="44d96c13026608943cb8f4546f8344620f310d0e3451cbdd705e0b0ca636be17" Sep 30 17:20:35 crc kubenswrapper[4821]: E0930 17:20:35.937570 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44d96c13026608943cb8f4546f8344620f310d0e3451cbdd705e0b0ca636be17\": container with ID starting with 44d96c13026608943cb8f4546f8344620f310d0e3451cbdd705e0b0ca636be17 not found: ID does not exist" containerID="44d96c13026608943cb8f4546f8344620f310d0e3451cbdd705e0b0ca636be17" Sep 30 17:20:35 crc kubenswrapper[4821]: I0930 17:20:35.937607 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44d96c13026608943cb8f4546f8344620f310d0e3451cbdd705e0b0ca636be17"} err="failed to get container status \"44d96c13026608943cb8f4546f8344620f310d0e3451cbdd705e0b0ca636be17\": rpc error: code = NotFound desc = could not find container \"44d96c13026608943cb8f4546f8344620f310d0e3451cbdd705e0b0ca636be17\": container with ID starting with 44d96c13026608943cb8f4546f8344620f310d0e3451cbdd705e0b0ca636be17 not found: ID does not exist" Sep 30 17:20:35 crc kubenswrapper[4821]: I0930 17:20:35.937634 4821 scope.go:117] "RemoveContainer" containerID="a0ad46de1519e1676ac4e5d92f313c7d090e740668982e886722e854d3f3afe6" Sep 30 17:20:35 crc kubenswrapper[4821]: E0930 17:20:35.937992 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0ad46de1519e1676ac4e5d92f313c7d090e740668982e886722e854d3f3afe6\": container with ID starting with a0ad46de1519e1676ac4e5d92f313c7d090e740668982e886722e854d3f3afe6 not found: ID does not exist" containerID="a0ad46de1519e1676ac4e5d92f313c7d090e740668982e886722e854d3f3afe6" Sep 30 17:20:35 crc kubenswrapper[4821]: I0930 17:20:35.938019 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0ad46de1519e1676ac4e5d92f313c7d090e740668982e886722e854d3f3afe6"} err="failed to get container status \"a0ad46de1519e1676ac4e5d92f313c7d090e740668982e886722e854d3f3afe6\": rpc error: code = NotFound desc = could not find container \"a0ad46de1519e1676ac4e5d92f313c7d090e740668982e886722e854d3f3afe6\": container with ID starting with a0ad46de1519e1676ac4e5d92f313c7d090e740668982e886722e854d3f3afe6 not found: ID does not exist" Sep 30 17:20:35 crc kubenswrapper[4821]: I0930 17:20:35.940539 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-9887b9bbf-xcxhr" Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.088385 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-687c4d45cb-97qzc" Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.090886 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9366bc1-d7bf-412a-bf0d-a122e3a3d10f-logs\") pod \"f9366bc1-d7bf-412a-bf0d-a122e3a3d10f\" (UID: \"f9366bc1-d7bf-412a-bf0d-a122e3a3d10f\") " Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.091030 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9366bc1-d7bf-412a-bf0d-a122e3a3d10f-config-data\") pod \"f9366bc1-d7bf-412a-bf0d-a122e3a3d10f\" (UID: \"f9366bc1-d7bf-412a-bf0d-a122e3a3d10f\") " Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.091092 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9366bc1-d7bf-412a-bf0d-a122e3a3d10f-scripts\") pod \"f9366bc1-d7bf-412a-bf0d-a122e3a3d10f\" (UID: \"f9366bc1-d7bf-412a-bf0d-a122e3a3d10f\") " Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.091113 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mczg9\" (UniqueName: \"kubernetes.io/projected/f9366bc1-d7bf-412a-bf0d-a122e3a3d10f-kube-api-access-mczg9\") pod \"f9366bc1-d7bf-412a-bf0d-a122e3a3d10f\" (UID: \"f9366bc1-d7bf-412a-bf0d-a122e3a3d10f\") " Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.091161 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f9366bc1-d7bf-412a-bf0d-a122e3a3d10f-horizon-secret-key\") pod \"f9366bc1-d7bf-412a-bf0d-a122e3a3d10f\" (UID: \"f9366bc1-d7bf-412a-bf0d-a122e3a3d10f\") " Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.091513 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9366bc1-d7bf-412a-bf0d-a122e3a3d10f-logs" (OuterVolumeSpecName: "logs") pod "f9366bc1-d7bf-412a-bf0d-a122e3a3d10f" (UID: "f9366bc1-d7bf-412a-bf0d-a122e3a3d10f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.096767 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9366bc1-d7bf-412a-bf0d-a122e3a3d10f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f9366bc1-d7bf-412a-bf0d-a122e3a3d10f" (UID: "f9366bc1-d7bf-412a-bf0d-a122e3a3d10f"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.097532 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9366bc1-d7bf-412a-bf0d-a122e3a3d10f-kube-api-access-mczg9" (OuterVolumeSpecName: "kube-api-access-mczg9") pod "f9366bc1-d7bf-412a-bf0d-a122e3a3d10f" (UID: "f9366bc1-d7bf-412a-bf0d-a122e3a3d10f"). InnerVolumeSpecName "kube-api-access-mczg9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.136225 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9366bc1-d7bf-412a-bf0d-a122e3a3d10f-config-data" (OuterVolumeSpecName: "config-data") pod "f9366bc1-d7bf-412a-bf0d-a122e3a3d10f" (UID: "f9366bc1-d7bf-412a-bf0d-a122e3a3d10f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.147845 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9366bc1-d7bf-412a-bf0d-a122e3a3d10f-scripts" (OuterVolumeSpecName: "scripts") pod "f9366bc1-d7bf-412a-bf0d-a122e3a3d10f" (UID: "f9366bc1-d7bf-412a-bf0d-a122e3a3d10f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.192995 4821 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f9366bc1-d7bf-412a-bf0d-a122e3a3d10f-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.193025 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9366bc1-d7bf-412a-bf0d-a122e3a3d10f-logs\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.193034 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9366bc1-d7bf-412a-bf0d-a122e3a3d10f-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.193043 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9366bc1-d7bf-412a-bf0d-a122e3a3d10f-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.193051 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mczg9\" (UniqueName: \"kubernetes.io/projected/f9366bc1-d7bf-412a-bf0d-a122e3a3d10f-kube-api-access-mczg9\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.355284 4821 generic.go:334] "Generic (PLEG): container finished" podID="f9366bc1-d7bf-412a-bf0d-a122e3a3d10f" containerID="454f5a23c528e73d08aae9575738d92de0eeb556525dc20af420bcd7009ccbed" exitCode=137 Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.356398 4821 generic.go:334] "Generic (PLEG): container finished" podID="f9366bc1-d7bf-412a-bf0d-a122e3a3d10f" containerID="753b32079f01a570cf60474ae0baf796e92256442342c4c67cda2fff5a80e70a" exitCode=137 Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.355351 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9887b9bbf-xcxhr" event={"ID":"f9366bc1-d7bf-412a-bf0d-a122e3a3d10f","Type":"ContainerDied","Data":"454f5a23c528e73d08aae9575738d92de0eeb556525dc20af420bcd7009ccbed"} Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.356582 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9887b9bbf-xcxhr" event={"ID":"f9366bc1-d7bf-412a-bf0d-a122e3a3d10f","Type":"ContainerDied","Data":"753b32079f01a570cf60474ae0baf796e92256442342c4c67cda2fff5a80e70a"} Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.356664 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9887b9bbf-xcxhr" 
event={"ID":"f9366bc1-d7bf-412a-bf0d-a122e3a3d10f","Type":"ContainerDied","Data":"7e78df174183dfec0fc8626a059b82b9426d889f11b8643f676cfffb8461de3a"} Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.356730 4821 scope.go:117] "RemoveContainer" containerID="454f5a23c528e73d08aae9575738d92de0eeb556525dc20af420bcd7009ccbed" Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.355319 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9887b9bbf-xcxhr" Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.394178 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-9887b9bbf-xcxhr"] Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.401187 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-9887b9bbf-xcxhr"] Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.526270 4821 scope.go:117] "RemoveContainer" containerID="753b32079f01a570cf60474ae0baf796e92256442342c4c67cda2fff5a80e70a" Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.544525 4821 scope.go:117] "RemoveContainer" containerID="454f5a23c528e73d08aae9575738d92de0eeb556525dc20af420bcd7009ccbed" Sep 30 17:20:36 crc kubenswrapper[4821]: E0930 17:20:36.544984 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"454f5a23c528e73d08aae9575738d92de0eeb556525dc20af420bcd7009ccbed\": container with ID starting with 454f5a23c528e73d08aae9575738d92de0eeb556525dc20af420bcd7009ccbed not found: ID does not exist" containerID="454f5a23c528e73d08aae9575738d92de0eeb556525dc20af420bcd7009ccbed" Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.545022 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"454f5a23c528e73d08aae9575738d92de0eeb556525dc20af420bcd7009ccbed"} err="failed to get container status \"454f5a23c528e73d08aae9575738d92de0eeb556525dc20af420bcd7009ccbed\": rpc error: code = NotFound desc = could not find container \"454f5a23c528e73d08aae9575738d92de0eeb556525dc20af420bcd7009ccbed\": container with ID starting with 454f5a23c528e73d08aae9575738d92de0eeb556525dc20af420bcd7009ccbed not found: ID does not exist" Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.545048 4821 scope.go:117] "RemoveContainer" containerID="753b32079f01a570cf60474ae0baf796e92256442342c4c67cda2fff5a80e70a" Sep 30 17:20:36 crc kubenswrapper[4821]: E0930 17:20:36.545414 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"753b32079f01a570cf60474ae0baf796e92256442342c4c67cda2fff5a80e70a\": container with ID starting with 753b32079f01a570cf60474ae0baf796e92256442342c4c67cda2fff5a80e70a not found: ID does not exist" containerID="753b32079f01a570cf60474ae0baf796e92256442342c4c67cda2fff5a80e70a" Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.545444 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"753b32079f01a570cf60474ae0baf796e92256442342c4c67cda2fff5a80e70a"} err="failed to get container status \"753b32079f01a570cf60474ae0baf796e92256442342c4c67cda2fff5a80e70a\": rpc error: code = NotFound desc = could not find container \"753b32079f01a570cf60474ae0baf796e92256442342c4c67cda2fff5a80e70a\": container with ID starting with 753b32079f01a570cf60474ae0baf796e92256442342c4c67cda2fff5a80e70a not found: ID does not exist" Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.545463 4821 
scope.go:117] "RemoveContainer" containerID="454f5a23c528e73d08aae9575738d92de0eeb556525dc20af420bcd7009ccbed" Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.545709 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"454f5a23c528e73d08aae9575738d92de0eeb556525dc20af420bcd7009ccbed"} err="failed to get container status \"454f5a23c528e73d08aae9575738d92de0eeb556525dc20af420bcd7009ccbed\": rpc error: code = NotFound desc = could not find container \"454f5a23c528e73d08aae9575738d92de0eeb556525dc20af420bcd7009ccbed\": container with ID starting with 454f5a23c528e73d08aae9575738d92de0eeb556525dc20af420bcd7009ccbed not found: ID does not exist" Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.545752 4821 scope.go:117] "RemoveContainer" containerID="753b32079f01a570cf60474ae0baf796e92256442342c4c67cda2fff5a80e70a" Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.546064 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"753b32079f01a570cf60474ae0baf796e92256442342c4c67cda2fff5a80e70a"} err="failed to get container status \"753b32079f01a570cf60474ae0baf796e92256442342c4c67cda2fff5a80e70a\": rpc error: code = NotFound desc = could not find container \"753b32079f01a570cf60474ae0baf796e92256442342c4c67cda2fff5a80e70a\": container with ID starting with 753b32079f01a570cf60474ae0baf796e92256442342c4c67cda2fff5a80e70a not found: ID does not exist" Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.717333 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="145b7040-eb73-4b29-9e7a-a96d867530c5" path="/var/lib/kubelet/pods/145b7040-eb73-4b29-9e7a-a96d867530c5/volumes" Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.717975 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1e20153-619c-4c3a-93ef-39c4b87d535e" path="/var/lib/kubelet/pods/c1e20153-619c-4c3a-93ef-39c4b87d535e/volumes" Sep 30 17:20:36 crc kubenswrapper[4821]: I0930 17:20:36.718571 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9366bc1-d7bf-412a-bf0d-a122e3a3d10f" path="/var/lib/kubelet/pods/f9366bc1-d7bf-412a-bf0d-a122e3a3d10f/volumes" Sep 30 17:20:37 crc kubenswrapper[4821]: I0930 17:20:37.939267 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f64d5748f-ckqhg" Sep 30 17:20:38 crc kubenswrapper[4821]: I0930 17:20:38.009469 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-vnhhm"] Sep 30 17:20:38 crc kubenswrapper[4821]: I0930 17:20:38.009687 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b946d459c-vnhhm" podUID="882bc52d-7f8b-461d-b7ae-b1e8660897ef" containerName="dnsmasq-dns" containerID="cri-o://8bfc289c53a6ff99983f0453082ed8a9bb7c2b677b5a0fc176331a8e17ab8377" gracePeriod=10 Sep 30 17:20:38 crc kubenswrapper[4821]: I0930 17:20:38.207684 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Sep 30 17:20:38 crc kubenswrapper[4821]: I0930 17:20:38.258569 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Sep 30 17:20:38 crc kubenswrapper[4821]: I0930 17:20:38.382311 4821 generic.go:334] "Generic (PLEG): container finished" podID="882bc52d-7f8b-461d-b7ae-b1e8660897ef" containerID="8bfc289c53a6ff99983f0453082ed8a9bb7c2b677b5a0fc176331a8e17ab8377" exitCode=0 Sep 30 17:20:38 crc kubenswrapper[4821]: 
I0930 17:20:38.382542 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b63853ff-e8ce-44a7-8ab1-ced03324999d" containerName="cinder-scheduler" containerID="cri-o://fcb60c3eea1196d2fa943ebfaa32fe8f6962db1cfe4ae1e460a36b29fdce6359" gracePeriod=30 Sep 30 17:20:38 crc kubenswrapper[4821]: I0930 17:20:38.382849 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-vnhhm" event={"ID":"882bc52d-7f8b-461d-b7ae-b1e8660897ef","Type":"ContainerDied","Data":"8bfc289c53a6ff99983f0453082ed8a9bb7c2b677b5a0fc176331a8e17ab8377"} Sep 30 17:20:38 crc kubenswrapper[4821]: I0930 17:20:38.383173 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b63853ff-e8ce-44a7-8ab1-ced03324999d" containerName="probe" containerID="cri-o://8cf2e2e8264341ba92599fc3b16a959200cde473d4b8897ea4886373d4587b15" gracePeriod=30 Sep 30 17:20:38 crc kubenswrapper[4821]: I0930 17:20:38.526524 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-vnhhm" Sep 30 17:20:38 crc kubenswrapper[4821]: I0930 17:20:38.639989 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/882bc52d-7f8b-461d-b7ae-b1e8660897ef-config\") pod \"882bc52d-7f8b-461d-b7ae-b1e8660897ef\" (UID: \"882bc52d-7f8b-461d-b7ae-b1e8660897ef\") " Sep 30 17:20:38 crc kubenswrapper[4821]: I0930 17:20:38.640208 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/882bc52d-7f8b-461d-b7ae-b1e8660897ef-ovsdbserver-nb\") pod \"882bc52d-7f8b-461d-b7ae-b1e8660897ef\" (UID: \"882bc52d-7f8b-461d-b7ae-b1e8660897ef\") " Sep 30 17:20:38 crc kubenswrapper[4821]: I0930 17:20:38.640242 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hrvl\" (UniqueName: \"kubernetes.io/projected/882bc52d-7f8b-461d-b7ae-b1e8660897ef-kube-api-access-9hrvl\") pod \"882bc52d-7f8b-461d-b7ae-b1e8660897ef\" (UID: \"882bc52d-7f8b-461d-b7ae-b1e8660897ef\") " Sep 30 17:20:38 crc kubenswrapper[4821]: I0930 17:20:38.640353 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/882bc52d-7f8b-461d-b7ae-b1e8660897ef-dns-svc\") pod \"882bc52d-7f8b-461d-b7ae-b1e8660897ef\" (UID: \"882bc52d-7f8b-461d-b7ae-b1e8660897ef\") " Sep 30 17:20:38 crc kubenswrapper[4821]: I0930 17:20:38.640376 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/882bc52d-7f8b-461d-b7ae-b1e8660897ef-ovsdbserver-sb\") pod \"882bc52d-7f8b-461d-b7ae-b1e8660897ef\" (UID: \"882bc52d-7f8b-461d-b7ae-b1e8660897ef\") " Sep 30 17:20:38 crc kubenswrapper[4821]: I0930 17:20:38.645607 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/882bc52d-7f8b-461d-b7ae-b1e8660897ef-kube-api-access-9hrvl" (OuterVolumeSpecName: "kube-api-access-9hrvl") pod "882bc52d-7f8b-461d-b7ae-b1e8660897ef" (UID: "882bc52d-7f8b-461d-b7ae-b1e8660897ef"). InnerVolumeSpecName "kube-api-access-9hrvl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:20:38 crc kubenswrapper[4821]: I0930 17:20:38.698724 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/882bc52d-7f8b-461d-b7ae-b1e8660897ef-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "882bc52d-7f8b-461d-b7ae-b1e8660897ef" (UID: "882bc52d-7f8b-461d-b7ae-b1e8660897ef"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:20:38 crc kubenswrapper[4821]: I0930 17:20:38.699565 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/882bc52d-7f8b-461d-b7ae-b1e8660897ef-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "882bc52d-7f8b-461d-b7ae-b1e8660897ef" (UID: "882bc52d-7f8b-461d-b7ae-b1e8660897ef"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:20:38 crc kubenswrapper[4821]: I0930 17:20:38.706649 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/882bc52d-7f8b-461d-b7ae-b1e8660897ef-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "882bc52d-7f8b-461d-b7ae-b1e8660897ef" (UID: "882bc52d-7f8b-461d-b7ae-b1e8660897ef"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:20:38 crc kubenswrapper[4821]: I0930 17:20:38.716686 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/882bc52d-7f8b-461d-b7ae-b1e8660897ef-config" (OuterVolumeSpecName: "config") pod "882bc52d-7f8b-461d-b7ae-b1e8660897ef" (UID: "882bc52d-7f8b-461d-b7ae-b1e8660897ef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:20:38 crc kubenswrapper[4821]: I0930 17:20:38.742042 4821 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/882bc52d-7f8b-461d-b7ae-b1e8660897ef-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:38 crc kubenswrapper[4821]: I0930 17:20:38.742071 4821 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/882bc52d-7f8b-461d-b7ae-b1e8660897ef-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:38 crc kubenswrapper[4821]: I0930 17:20:38.742096 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/882bc52d-7f8b-461d-b7ae-b1e8660897ef-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:38 crc kubenswrapper[4821]: I0930 17:20:38.742107 4821 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/882bc52d-7f8b-461d-b7ae-b1e8660897ef-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:38 crc kubenswrapper[4821]: I0930 17:20:38.742117 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hrvl\" (UniqueName: \"kubernetes.io/projected/882bc52d-7f8b-461d-b7ae-b1e8660897ef-kube-api-access-9hrvl\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:39 crc kubenswrapper[4821]: I0930 17:20:39.401561 4821 generic.go:334] "Generic (PLEG): container finished" podID="b63853ff-e8ce-44a7-8ab1-ced03324999d" containerID="8cf2e2e8264341ba92599fc3b16a959200cde473d4b8897ea4886373d4587b15" exitCode=0 Sep 30 17:20:39 crc kubenswrapper[4821]: I0930 17:20:39.402746 4821 generic.go:334] "Generic (PLEG): container finished" podID="b63853ff-e8ce-44a7-8ab1-ced03324999d" 
containerID="fcb60c3eea1196d2fa943ebfaa32fe8f6962db1cfe4ae1e460a36b29fdce6359" exitCode=0 Sep 30 17:20:39 crc kubenswrapper[4821]: I0930 17:20:39.401650 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b63853ff-e8ce-44a7-8ab1-ced03324999d","Type":"ContainerDied","Data":"8cf2e2e8264341ba92599fc3b16a959200cde473d4b8897ea4886373d4587b15"} Sep 30 17:20:39 crc kubenswrapper[4821]: I0930 17:20:39.402842 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b63853ff-e8ce-44a7-8ab1-ced03324999d","Type":"ContainerDied","Data":"fcb60c3eea1196d2fa943ebfaa32fe8f6962db1cfe4ae1e460a36b29fdce6359"} Sep 30 17:20:39 crc kubenswrapper[4821]: I0930 17:20:39.409324 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-vnhhm" event={"ID":"882bc52d-7f8b-461d-b7ae-b1e8660897ef","Type":"ContainerDied","Data":"0887568f576da04fc4ccca20e9ceb88c0879f51fbda4513aabf11bcde07bfd77"} Sep 30 17:20:39 crc kubenswrapper[4821]: I0930 17:20:39.409384 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-vnhhm" Sep 30 17:20:39 crc kubenswrapper[4821]: I0930 17:20:39.409394 4821 scope.go:117] "RemoveContainer" containerID="8bfc289c53a6ff99983f0453082ed8a9bb7c2b677b5a0fc176331a8e17ab8377" Sep 30 17:20:39 crc kubenswrapper[4821]: I0930 17:20:39.529597 4821 scope.go:117] "RemoveContainer" containerID="22e70be9abf5816120b58f092584fd10bc6ac6322e413fe738018bb231000aed" Sep 30 17:20:39 crc kubenswrapper[4821]: I0930 17:20:39.537092 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Sep 30 17:20:39 crc kubenswrapper[4821]: I0930 17:20:39.555160 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-vnhhm"] Sep 30 17:20:39 crc kubenswrapper[4821]: I0930 17:20:39.562945 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-vnhhm"] Sep 30 17:20:39 crc kubenswrapper[4821]: I0930 17:20:39.656802 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b63853ff-e8ce-44a7-8ab1-ced03324999d-config-data\") pod \"b63853ff-e8ce-44a7-8ab1-ced03324999d\" (UID: \"b63853ff-e8ce-44a7-8ab1-ced03324999d\") " Sep 30 17:20:39 crc kubenswrapper[4821]: I0930 17:20:39.657165 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b63853ff-e8ce-44a7-8ab1-ced03324999d-scripts\") pod \"b63853ff-e8ce-44a7-8ab1-ced03324999d\" (UID: \"b63853ff-e8ce-44a7-8ab1-ced03324999d\") " Sep 30 17:20:39 crc kubenswrapper[4821]: I0930 17:20:39.657193 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b63853ff-e8ce-44a7-8ab1-ced03324999d-config-data-custom\") pod \"b63853ff-e8ce-44a7-8ab1-ced03324999d\" (UID: \"b63853ff-e8ce-44a7-8ab1-ced03324999d\") " Sep 30 17:20:39 crc kubenswrapper[4821]: I0930 17:20:39.657273 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b63853ff-e8ce-44a7-8ab1-ced03324999d-etc-machine-id\") pod \"b63853ff-e8ce-44a7-8ab1-ced03324999d\" (UID: \"b63853ff-e8ce-44a7-8ab1-ced03324999d\") " Sep 30 17:20:39 crc kubenswrapper[4821]: I0930 17:20:39.657305 4821 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-2rv7s\" (UniqueName: \"kubernetes.io/projected/b63853ff-e8ce-44a7-8ab1-ced03324999d-kube-api-access-2rv7s\") pod \"b63853ff-e8ce-44a7-8ab1-ced03324999d\" (UID: \"b63853ff-e8ce-44a7-8ab1-ced03324999d\") " Sep 30 17:20:39 crc kubenswrapper[4821]: I0930 17:20:39.657385 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b63853ff-e8ce-44a7-8ab1-ced03324999d-combined-ca-bundle\") pod \"b63853ff-e8ce-44a7-8ab1-ced03324999d\" (UID: \"b63853ff-e8ce-44a7-8ab1-ced03324999d\") " Sep 30 17:20:39 crc kubenswrapper[4821]: I0930 17:20:39.659061 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b63853ff-e8ce-44a7-8ab1-ced03324999d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b63853ff-e8ce-44a7-8ab1-ced03324999d" (UID: "b63853ff-e8ce-44a7-8ab1-ced03324999d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:20:39 crc kubenswrapper[4821]: I0930 17:20:39.665838 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b63853ff-e8ce-44a7-8ab1-ced03324999d-scripts" (OuterVolumeSpecName: "scripts") pod "b63853ff-e8ce-44a7-8ab1-ced03324999d" (UID: "b63853ff-e8ce-44a7-8ab1-ced03324999d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:20:39 crc kubenswrapper[4821]: I0930 17:20:39.671489 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b63853ff-e8ce-44a7-8ab1-ced03324999d-kube-api-access-2rv7s" (OuterVolumeSpecName: "kube-api-access-2rv7s") pod "b63853ff-e8ce-44a7-8ab1-ced03324999d" (UID: "b63853ff-e8ce-44a7-8ab1-ced03324999d"). InnerVolumeSpecName "kube-api-access-2rv7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:20:39 crc kubenswrapper[4821]: I0930 17:20:39.686272 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b63853ff-e8ce-44a7-8ab1-ced03324999d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b63853ff-e8ce-44a7-8ab1-ced03324999d" (UID: "b63853ff-e8ce-44a7-8ab1-ced03324999d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:20:39 crc kubenswrapper[4821]: I0930 17:20:39.739181 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b63853ff-e8ce-44a7-8ab1-ced03324999d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b63853ff-e8ce-44a7-8ab1-ced03324999d" (UID: "b63853ff-e8ce-44a7-8ab1-ced03324999d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:20:39 crc kubenswrapper[4821]: I0930 17:20:39.758922 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b63853ff-e8ce-44a7-8ab1-ced03324999d-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:39 crc kubenswrapper[4821]: I0930 17:20:39.758952 4821 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b63853ff-e8ce-44a7-8ab1-ced03324999d-config-data-custom\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:39 crc kubenswrapper[4821]: I0930 17:20:39.758961 4821 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b63853ff-e8ce-44a7-8ab1-ced03324999d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:39 crc kubenswrapper[4821]: I0930 17:20:39.758970 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rv7s\" (UniqueName: \"kubernetes.io/projected/b63853ff-e8ce-44a7-8ab1-ced03324999d-kube-api-access-2rv7s\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:39 crc kubenswrapper[4821]: I0930 17:20:39.758979 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b63853ff-e8ce-44a7-8ab1-ced03324999d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:39 crc kubenswrapper[4821]: I0930 17:20:39.762367 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b63853ff-e8ce-44a7-8ab1-ced03324999d-config-data" (OuterVolumeSpecName: "config-data") pod "b63853ff-e8ce-44a7-8ab1-ced03324999d" (UID: "b63853ff-e8ce-44a7-8ab1-ced03324999d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:20:39 crc kubenswrapper[4821]: I0930 17:20:39.803716 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5785886597-f9l4l" Sep 30 17:20:39 crc kubenswrapper[4821]: I0930 17:20:39.865168 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b63853ff-e8ce-44a7-8ab1-ced03324999d-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:39 crc kubenswrapper[4821]: I0930 17:20:39.877999 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-687c4d45cb-97qzc"] Sep 30 17:20:39 crc kubenswrapper[4821]: I0930 17:20:39.878577 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-687c4d45cb-97qzc" podUID="bf0686e3-68e9-45aa-a625-ba24fc284342" containerName="neutron-api" containerID="cri-o://22ee3c074e073d03da770651fc0efe31c8dacaa5e41e82996264c698d02e5252" gracePeriod=30 Sep 30 17:20:39 crc kubenswrapper[4821]: I0930 17:20:39.878699 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-687c4d45cb-97qzc" podUID="bf0686e3-68e9-45aa-a625-ba24fc284342" containerName="neutron-httpd" containerID="cri-o://c35bbaf60f715bc9224b7040c1ca471a912ab509c7cafa377e2ba7a2bbfa3915" gracePeriod=30 Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.420943 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b63853ff-e8ce-44a7-8ab1-ced03324999d","Type":"ContainerDied","Data":"0b65a7de65d541fb68937534e005db9f328ced4a2d5f719101c77af0274bb121"} Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.421002 4821 scope.go:117] "RemoveContainer" 
containerID="8cf2e2e8264341ba92599fc3b16a959200cde473d4b8897ea4886373d4587b15" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.421138 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.422824 4821 generic.go:334] "Generic (PLEG): container finished" podID="bf0686e3-68e9-45aa-a625-ba24fc284342" containerID="c35bbaf60f715bc9224b7040c1ca471a912ab509c7cafa377e2ba7a2bbfa3915" exitCode=0 Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.422891 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-687c4d45cb-97qzc" event={"ID":"bf0686e3-68e9-45aa-a625-ba24fc284342","Type":"ContainerDied","Data":"c35bbaf60f715bc9224b7040c1ca471a912ab509c7cafa377e2ba7a2bbfa3915"} Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.453175 4821 scope.go:117] "RemoveContainer" containerID="fcb60c3eea1196d2fa943ebfaa32fe8f6962db1cfe4ae1e460a36b29fdce6359" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.474800 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.500311 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.518576 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Sep 30 17:20:40 crc kubenswrapper[4821]: E0930 17:20:40.518942 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1e20153-619c-4c3a-93ef-39c4b87d535e" containerName="horizon-log" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.518958 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1e20153-619c-4c3a-93ef-39c4b87d535e" containerName="horizon-log" Sep 30 17:20:40 crc kubenswrapper[4821]: E0930 17:20:40.518975 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="882bc52d-7f8b-461d-b7ae-b1e8660897ef" containerName="init" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.518981 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="882bc52d-7f8b-461d-b7ae-b1e8660897ef" containerName="init" Sep 30 17:20:40 crc kubenswrapper[4821]: E0930 17:20:40.518992 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b63853ff-e8ce-44a7-8ab1-ced03324999d" containerName="probe" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.518999 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="b63853ff-e8ce-44a7-8ab1-ced03324999d" containerName="probe" Sep 30 17:20:40 crc kubenswrapper[4821]: E0930 17:20:40.519007 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="145b7040-eb73-4b29-9e7a-a96d867530c5" containerName="horizon-log" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.519014 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="145b7040-eb73-4b29-9e7a-a96d867530c5" containerName="horizon-log" Sep 30 17:20:40 crc kubenswrapper[4821]: E0930 17:20:40.519021 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="882bc52d-7f8b-461d-b7ae-b1e8660897ef" containerName="dnsmasq-dns" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.519027 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="882bc52d-7f8b-461d-b7ae-b1e8660897ef" containerName="dnsmasq-dns" Sep 30 17:20:40 crc kubenswrapper[4821]: E0930 17:20:40.519040 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9366bc1-d7bf-412a-bf0d-a122e3a3d10f" 
containerName="horizon" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.519046 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9366bc1-d7bf-412a-bf0d-a122e3a3d10f" containerName="horizon" Sep 30 17:20:40 crc kubenswrapper[4821]: E0930 17:20:40.519057 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="145b7040-eb73-4b29-9e7a-a96d867530c5" containerName="horizon" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.519063 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="145b7040-eb73-4b29-9e7a-a96d867530c5" containerName="horizon" Sep 30 17:20:40 crc kubenswrapper[4821]: E0930 17:20:40.519074 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9366bc1-d7bf-412a-bf0d-a122e3a3d10f" containerName="horizon-log" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.519095 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9366bc1-d7bf-412a-bf0d-a122e3a3d10f" containerName="horizon-log" Sep 30 17:20:40 crc kubenswrapper[4821]: E0930 17:20:40.519104 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b63853ff-e8ce-44a7-8ab1-ced03324999d" containerName="cinder-scheduler" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.519110 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="b63853ff-e8ce-44a7-8ab1-ced03324999d" containerName="cinder-scheduler" Sep 30 17:20:40 crc kubenswrapper[4821]: E0930 17:20:40.519132 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1e20153-619c-4c3a-93ef-39c4b87d535e" containerName="horizon" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.519137 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1e20153-619c-4c3a-93ef-39c4b87d535e" containerName="horizon" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.519310 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="b63853ff-e8ce-44a7-8ab1-ced03324999d" containerName="probe" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.519324 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9366bc1-d7bf-412a-bf0d-a122e3a3d10f" containerName="horizon" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.519334 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="882bc52d-7f8b-461d-b7ae-b1e8660897ef" containerName="dnsmasq-dns" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.519341 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="145b7040-eb73-4b29-9e7a-a96d867530c5" containerName="horizon" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.519350 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9366bc1-d7bf-412a-bf0d-a122e3a3d10f" containerName="horizon-log" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.519362 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="145b7040-eb73-4b29-9e7a-a96d867530c5" containerName="horizon-log" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.519373 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1e20153-619c-4c3a-93ef-39c4b87d535e" containerName="horizon-log" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.519379 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="b63853ff-e8ce-44a7-8ab1-ced03324999d" containerName="cinder-scheduler" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.519388 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1e20153-619c-4c3a-93ef-39c4b87d535e" containerName="horizon" Sep 30 17:20:40 
crc kubenswrapper[4821]: I0930 17:20:40.520467 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.528844 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.548357 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.680779 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/930ee415-1c81-4995-86aa-9ba2f22e81f0-scripts\") pod \"cinder-scheduler-0\" (UID: \"930ee415-1c81-4995-86aa-9ba2f22e81f0\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.680867 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/930ee415-1c81-4995-86aa-9ba2f22e81f0-config-data\") pod \"cinder-scheduler-0\" (UID: \"930ee415-1c81-4995-86aa-9ba2f22e81f0\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.680905 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/930ee415-1c81-4995-86aa-9ba2f22e81f0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"930ee415-1c81-4995-86aa-9ba2f22e81f0\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.680941 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/930ee415-1c81-4995-86aa-9ba2f22e81f0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"930ee415-1c81-4995-86aa-9ba2f22e81f0\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.680969 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/930ee415-1c81-4995-86aa-9ba2f22e81f0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"930ee415-1c81-4995-86aa-9ba2f22e81f0\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.681020 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlk88\" (UniqueName: \"kubernetes.io/projected/930ee415-1c81-4995-86aa-9ba2f22e81f0-kube-api-access-rlk88\") pod \"cinder-scheduler-0\" (UID: \"930ee415-1c81-4995-86aa-9ba2f22e81f0\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.717487 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="882bc52d-7f8b-461d-b7ae-b1e8660897ef" path="/var/lib/kubelet/pods/882bc52d-7f8b-461d-b7ae-b1e8660897ef/volumes" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.718259 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b63853ff-e8ce-44a7-8ab1-ced03324999d" path="/var/lib/kubelet/pods/b63853ff-e8ce-44a7-8ab1-ced03324999d/volumes" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.782628 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/930ee415-1c81-4995-86aa-9ba2f22e81f0-scripts\") pod 
\"cinder-scheduler-0\" (UID: \"930ee415-1c81-4995-86aa-9ba2f22e81f0\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.782717 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/930ee415-1c81-4995-86aa-9ba2f22e81f0-config-data\") pod \"cinder-scheduler-0\" (UID: \"930ee415-1c81-4995-86aa-9ba2f22e81f0\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.782744 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/930ee415-1c81-4995-86aa-9ba2f22e81f0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"930ee415-1c81-4995-86aa-9ba2f22e81f0\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.782777 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/930ee415-1c81-4995-86aa-9ba2f22e81f0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"930ee415-1c81-4995-86aa-9ba2f22e81f0\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.782809 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/930ee415-1c81-4995-86aa-9ba2f22e81f0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"930ee415-1c81-4995-86aa-9ba2f22e81f0\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.782852 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlk88\" (UniqueName: \"kubernetes.io/projected/930ee415-1c81-4995-86aa-9ba2f22e81f0-kube-api-access-rlk88\") pod \"cinder-scheduler-0\" (UID: \"930ee415-1c81-4995-86aa-9ba2f22e81f0\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.783671 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/930ee415-1c81-4995-86aa-9ba2f22e81f0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"930ee415-1c81-4995-86aa-9ba2f22e81f0\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.787188 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/930ee415-1c81-4995-86aa-9ba2f22e81f0-scripts\") pod \"cinder-scheduler-0\" (UID: \"930ee415-1c81-4995-86aa-9ba2f22e81f0\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.787331 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/930ee415-1c81-4995-86aa-9ba2f22e81f0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"930ee415-1c81-4995-86aa-9ba2f22e81f0\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.788882 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/930ee415-1c81-4995-86aa-9ba2f22e81f0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"930ee415-1c81-4995-86aa-9ba2f22e81f0\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.789227 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/930ee415-1c81-4995-86aa-9ba2f22e81f0-config-data\") pod \"cinder-scheduler-0\" (UID: \"930ee415-1c81-4995-86aa-9ba2f22e81f0\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.804612 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlk88\" (UniqueName: \"kubernetes.io/projected/930ee415-1c81-4995-86aa-9ba2f22e81f0-kube-api-access-rlk88\") pod \"cinder-scheduler-0\" (UID: \"930ee415-1c81-4995-86aa-9ba2f22e81f0\") " pod="openstack/cinder-scheduler-0" Sep 30 17:20:40 crc kubenswrapper[4821]: I0930 17:20:40.852457 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Sep 30 17:20:41 crc kubenswrapper[4821]: I0930 17:20:41.283628 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Sep 30 17:20:41 crc kubenswrapper[4821]: I0930 17:20:41.347541 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Sep 30 17:20:41 crc kubenswrapper[4821]: I0930 17:20:41.471990 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"930ee415-1c81-4995-86aa-9ba2f22e81f0","Type":"ContainerStarted","Data":"18d5668bf52bdb6f401bf08653eabb486965034bcb2d22b0437a4a7e7a6f6886"} Sep 30 17:20:42 crc kubenswrapper[4821]: I0930 17:20:42.496063 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"930ee415-1c81-4995-86aa-9ba2f22e81f0","Type":"ContainerStarted","Data":"3641e548c1ce56260a35b7f0c39821560c168a1e4e256c055793aad29696e36c"} Sep 30 17:20:42 crc kubenswrapper[4821]: I0930 17:20:42.999650 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-78b9594fb8-nw9qj" Sep 30 17:20:43 crc kubenswrapper[4821]: I0930 17:20:43.007007 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-687c4d45cb-97qzc" Sep 30 17:20:43 crc kubenswrapper[4821]: I0930 17:20:43.020217 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bf0686e3-68e9-45aa-a625-ba24fc284342-httpd-config\") pod \"bf0686e3-68e9-45aa-a625-ba24fc284342\" (UID: \"bf0686e3-68e9-45aa-a625-ba24fc284342\") " Sep 30 17:20:43 crc kubenswrapper[4821]: I0930 17:20:43.020254 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bf0686e3-68e9-45aa-a625-ba24fc284342-config\") pod \"bf0686e3-68e9-45aa-a625-ba24fc284342\" (UID: \"bf0686e3-68e9-45aa-a625-ba24fc284342\") " Sep 30 17:20:43 crc kubenswrapper[4821]: I0930 17:20:43.020318 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf0686e3-68e9-45aa-a625-ba24fc284342-ovndb-tls-certs\") pod \"bf0686e3-68e9-45aa-a625-ba24fc284342\" (UID: \"bf0686e3-68e9-45aa-a625-ba24fc284342\") " Sep 30 17:20:43 crc kubenswrapper[4821]: I0930 17:20:43.020339 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpfhm\" (UniqueName: \"kubernetes.io/projected/bf0686e3-68e9-45aa-a625-ba24fc284342-kube-api-access-dpfhm\") pod \"bf0686e3-68e9-45aa-a625-ba24fc284342\" (UID: \"bf0686e3-68e9-45aa-a625-ba24fc284342\") " Sep 30 17:20:43 crc kubenswrapper[4821]: I0930 17:20:43.020355 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf0686e3-68e9-45aa-a625-ba24fc284342-combined-ca-bundle\") pod \"bf0686e3-68e9-45aa-a625-ba24fc284342\" (UID: \"bf0686e3-68e9-45aa-a625-ba24fc284342\") " Sep 30 17:20:43 crc kubenswrapper[4821]: I0930 17:20:43.039848 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf0686e3-68e9-45aa-a625-ba24fc284342-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "bf0686e3-68e9-45aa-a625-ba24fc284342" (UID: "bf0686e3-68e9-45aa-a625-ba24fc284342"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:20:43 crc kubenswrapper[4821]: I0930 17:20:43.042456 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf0686e3-68e9-45aa-a625-ba24fc284342-kube-api-access-dpfhm" (OuterVolumeSpecName: "kube-api-access-dpfhm") pod "bf0686e3-68e9-45aa-a625-ba24fc284342" (UID: "bf0686e3-68e9-45aa-a625-ba24fc284342"). InnerVolumeSpecName "kube-api-access-dpfhm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:20:43 crc kubenswrapper[4821]: I0930 17:20:43.049123 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpfhm\" (UniqueName: \"kubernetes.io/projected/bf0686e3-68e9-45aa-a625-ba24fc284342-kube-api-access-dpfhm\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:43 crc kubenswrapper[4821]: I0930 17:20:43.049148 4821 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bf0686e3-68e9-45aa-a625-ba24fc284342-httpd-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:43 crc kubenswrapper[4821]: I0930 17:20:43.105803 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf0686e3-68e9-45aa-a625-ba24fc284342-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf0686e3-68e9-45aa-a625-ba24fc284342" (UID: "bf0686e3-68e9-45aa-a625-ba24fc284342"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:20:43 crc kubenswrapper[4821]: I0930 17:20:43.121634 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5b974b45dd-mbzvm" Sep 30 17:20:43 crc kubenswrapper[4821]: I0930 17:20:43.151596 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf0686e3-68e9-45aa-a625-ba24fc284342-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:43 crc kubenswrapper[4821]: I0930 17:20:43.192410 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf0686e3-68e9-45aa-a625-ba24fc284342-config" (OuterVolumeSpecName: "config") pod "bf0686e3-68e9-45aa-a625-ba24fc284342" (UID: "bf0686e3-68e9-45aa-a625-ba24fc284342"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:20:43 crc kubenswrapper[4821]: I0930 17:20:43.201670 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf0686e3-68e9-45aa-a625-ba24fc284342-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "bf0686e3-68e9-45aa-a625-ba24fc284342" (UID: "bf0686e3-68e9-45aa-a625-ba24fc284342"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:20:43 crc kubenswrapper[4821]: I0930 17:20:43.252843 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/bf0686e3-68e9-45aa-a625-ba24fc284342-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:43 crc kubenswrapper[4821]: I0930 17:20:43.253178 4821 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf0686e3-68e9-45aa-a625-ba24fc284342-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:43 crc kubenswrapper[4821]: I0930 17:20:43.525600 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"930ee415-1c81-4995-86aa-9ba2f22e81f0","Type":"ContainerStarted","Data":"6ee04ac59975ab6a322ee89b3c4d0e1006e1a152a58b44bd735701e4f325d4b8"} Sep 30 17:20:43 crc kubenswrapper[4821]: I0930 17:20:43.534895 4821 generic.go:334] "Generic (PLEG): container finished" podID="bf0686e3-68e9-45aa-a625-ba24fc284342" containerID="22ee3c074e073d03da770651fc0efe31c8dacaa5e41e82996264c698d02e5252" exitCode=0 Sep 30 17:20:43 crc kubenswrapper[4821]: I0930 17:20:43.534941 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-687c4d45cb-97qzc" event={"ID":"bf0686e3-68e9-45aa-a625-ba24fc284342","Type":"ContainerDied","Data":"22ee3c074e073d03da770651fc0efe31c8dacaa5e41e82996264c698d02e5252"} Sep 30 17:20:43 crc kubenswrapper[4821]: I0930 17:20:43.534958 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-687c4d45cb-97qzc" Sep 30 17:20:43 crc kubenswrapper[4821]: I0930 17:20:43.534968 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-687c4d45cb-97qzc" event={"ID":"bf0686e3-68e9-45aa-a625-ba24fc284342","Type":"ContainerDied","Data":"c891227b397ff7d8612625ae73d9a3854cf0489435060594c3a2f58536094d7a"} Sep 30 17:20:43 crc kubenswrapper[4821]: I0930 17:20:43.534986 4821 scope.go:117] "RemoveContainer" containerID="c35bbaf60f715bc9224b7040c1ca471a912ab509c7cafa377e2ba7a2bbfa3915" Sep 30 17:20:43 crc kubenswrapper[4821]: I0930 17:20:43.548628 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.548614631 podStartE2EDuration="3.548614631s" podCreationTimestamp="2025-09-30 17:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:20:43.547634346 +0000 UTC m=+1039.452680290" watchObservedRunningTime="2025-09-30 17:20:43.548614631 +0000 UTC m=+1039.453660575" Sep 30 17:20:43 crc kubenswrapper[4821]: I0930 17:20:43.583323 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-687c4d45cb-97qzc"] Sep 30 17:20:43 crc kubenswrapper[4821]: I0930 17:20:43.584767 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-687c4d45cb-97qzc"] Sep 30 17:20:43 crc kubenswrapper[4821]: I0930 17:20:43.613592 4821 scope.go:117] "RemoveContainer" containerID="22ee3c074e073d03da770651fc0efe31c8dacaa5e41e82996264c698d02e5252" Sep 30 17:20:43 crc kubenswrapper[4821]: I0930 17:20:43.668711 4821 scope.go:117] "RemoveContainer" containerID="c35bbaf60f715bc9224b7040c1ca471a912ab509c7cafa377e2ba7a2bbfa3915" Sep 30 17:20:43 crc kubenswrapper[4821]: E0930 17:20:43.669411 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c35bbaf60f715bc9224b7040c1ca471a912ab509c7cafa377e2ba7a2bbfa3915\": container with ID starting with c35bbaf60f715bc9224b7040c1ca471a912ab509c7cafa377e2ba7a2bbfa3915 not found: ID does not exist" containerID="c35bbaf60f715bc9224b7040c1ca471a912ab509c7cafa377e2ba7a2bbfa3915" Sep 30 17:20:43 crc kubenswrapper[4821]: I0930 17:20:43.669442 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c35bbaf60f715bc9224b7040c1ca471a912ab509c7cafa377e2ba7a2bbfa3915"} err="failed to get container status \"c35bbaf60f715bc9224b7040c1ca471a912ab509c7cafa377e2ba7a2bbfa3915\": rpc error: code = NotFound desc = could not find container \"c35bbaf60f715bc9224b7040c1ca471a912ab509c7cafa377e2ba7a2bbfa3915\": container with ID starting with c35bbaf60f715bc9224b7040c1ca471a912ab509c7cafa377e2ba7a2bbfa3915 not found: ID does not exist" Sep 30 17:20:43 crc kubenswrapper[4821]: I0930 17:20:43.669466 4821 scope.go:117] "RemoveContainer" containerID="22ee3c074e073d03da770651fc0efe31c8dacaa5e41e82996264c698d02e5252" Sep 30 17:20:43 crc kubenswrapper[4821]: E0930 17:20:43.669937 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22ee3c074e073d03da770651fc0efe31c8dacaa5e41e82996264c698d02e5252\": container with ID starting with 22ee3c074e073d03da770651fc0efe31c8dacaa5e41e82996264c698d02e5252 not found: ID does not exist" containerID="22ee3c074e073d03da770651fc0efe31c8dacaa5e41e82996264c698d02e5252" Sep 30 17:20:43 crc kubenswrapper[4821]: I0930 17:20:43.669961 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22ee3c074e073d03da770651fc0efe31c8dacaa5e41e82996264c698d02e5252"} err="failed to get container status \"22ee3c074e073d03da770651fc0efe31c8dacaa5e41e82996264c698d02e5252\": rpc error: code = NotFound desc = could not find container \"22ee3c074e073d03da770651fc0efe31c8dacaa5e41e82996264c698d02e5252\": container with ID starting with 22ee3c074e073d03da770651fc0efe31c8dacaa5e41e82996264c698d02e5252 not found: ID does not exist" Sep 30 17:20:43 crc kubenswrapper[4821]: I0930 17:20:43.684777 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5d47786ff6-8tnsh" Sep 30 17:20:43 crc kubenswrapper[4821]: I0930 17:20:43.694778 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5d47786ff6-8tnsh" Sep 30 17:20:44 crc kubenswrapper[4821]: I0930 17:20:44.716062 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf0686e3-68e9-45aa-a625-ba24fc284342" path="/var/lib/kubelet/pods/bf0686e3-68e9-45aa-a625-ba24fc284342/volumes" Sep 30 17:20:45 crc kubenswrapper[4821]: I0930 17:20:45.149632 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-78b9594fb8-nw9qj" Sep 30 17:20:45 crc kubenswrapper[4821]: I0930 17:20:45.236994 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b974b45dd-mbzvm"] Sep 30 17:20:45 crc kubenswrapper[4821]: I0930 17:20:45.237238 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b974b45dd-mbzvm" podUID="abd2779c-c7a7-4d42-8e83-7cbec573d595" containerName="horizon-log" containerID="cri-o://3f2e2c5b1ca73cd1df218b04af23793901759c941bb7c049b256c6415feea341" gracePeriod=30 Sep 30 17:20:45 crc kubenswrapper[4821]: I0930 17:20:45.237572 4821 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/horizon-5b974b45dd-mbzvm" podUID="abd2779c-c7a7-4d42-8e83-7cbec573d595" containerName="horizon" containerID="cri-o://8b9d25a4612b66e90a5bf40aa674b5cc3cc6d13a971523e910f6f524e850cfd0" gracePeriod=30 Sep 30 17:20:45 crc kubenswrapper[4821]: I0930 17:20:45.257414 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5b974b45dd-mbzvm" podUID="abd2779c-c7a7-4d42-8e83-7cbec573d595" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Sep 30 17:20:45 crc kubenswrapper[4821]: I0930 17:20:45.853303 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Sep 30 17:20:47 crc kubenswrapper[4821]: I0930 17:20:47.437620 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-dcc94dcb7-4dt65" Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.529047 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Sep 30 17:20:48 crc kubenswrapper[4821]: E0930 17:20:48.532096 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf0686e3-68e9-45aa-a625-ba24fc284342" containerName="neutron-httpd" Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.532255 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf0686e3-68e9-45aa-a625-ba24fc284342" containerName="neutron-httpd" Sep 30 17:20:48 crc kubenswrapper[4821]: E0930 17:20:48.532344 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf0686e3-68e9-45aa-a625-ba24fc284342" containerName="neutron-api" Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.532400 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf0686e3-68e9-45aa-a625-ba24fc284342" containerName="neutron-api" Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.532667 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf0686e3-68e9-45aa-a625-ba24fc284342" containerName="neutron-api" Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.532732 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf0686e3-68e9-45aa-a625-ba24fc284342" containerName="neutron-httpd" Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.533431 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.537567 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-th2zh" Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.537638 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.537567 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.546326 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.639189 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ddaf91ae-a48b-4341-a72e-1e0068e295f9-openstack-config-secret\") pod \"openstackclient\" (UID: \"ddaf91ae-a48b-4341-a72e-1e0068e295f9\") " pod="openstack/openstackclient" Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.639229 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddaf91ae-a48b-4341-a72e-1e0068e295f9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ddaf91ae-a48b-4341-a72e-1e0068e295f9\") " pod="openstack/openstackclient" Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.639257 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ddaf91ae-a48b-4341-a72e-1e0068e295f9-openstack-config\") pod \"openstackclient\" (UID: \"ddaf91ae-a48b-4341-a72e-1e0068e295f9\") " pod="openstack/openstackclient" Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.639395 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfg4v\" (UniqueName: \"kubernetes.io/projected/ddaf91ae-a48b-4341-a72e-1e0068e295f9-kube-api-access-qfg4v\") pod \"openstackclient\" (UID: \"ddaf91ae-a48b-4341-a72e-1e0068e295f9\") " pod="openstack/openstackclient" Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.658195 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5b974b45dd-mbzvm" podUID="abd2779c-c7a7-4d42-8e83-7cbec573d595" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:39898->10.217.0.142:8443: read: connection reset by peer" Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.735724 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Sep 30 17:20:48 crc kubenswrapper[4821]: E0930 17:20:48.736513 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-qfg4v openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="ddaf91ae-a48b-4341-a72e-1e0068e295f9" Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.741158 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ddaf91ae-a48b-4341-a72e-1e0068e295f9-openstack-config-secret\") pod \"openstackclient\" (UID: 
\"ddaf91ae-a48b-4341-a72e-1e0068e295f9\") " pod="openstack/openstackclient" Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.741196 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddaf91ae-a48b-4341-a72e-1e0068e295f9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ddaf91ae-a48b-4341-a72e-1e0068e295f9\") " pod="openstack/openstackclient" Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.741218 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ddaf91ae-a48b-4341-a72e-1e0068e295f9-openstack-config\") pod \"openstackclient\" (UID: \"ddaf91ae-a48b-4341-a72e-1e0068e295f9\") " pod="openstack/openstackclient" Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.741262 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfg4v\" (UniqueName: \"kubernetes.io/projected/ddaf91ae-a48b-4341-a72e-1e0068e295f9-kube-api-access-qfg4v\") pod \"openstackclient\" (UID: \"ddaf91ae-a48b-4341-a72e-1e0068e295f9\") " pod="openstack/openstackclient" Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.742367 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ddaf91ae-a48b-4341-a72e-1e0068e295f9-openstack-config\") pod \"openstackclient\" (UID: \"ddaf91ae-a48b-4341-a72e-1e0068e295f9\") " pod="openstack/openstackclient" Sep 30 17:20:48 crc kubenswrapper[4821]: E0930 17:20:48.743638 4821 projected.go:194] Error preparing data for projected volume kube-api-access-qfg4v for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: User "system:node:crc" cannot create resource "serviceaccounts/token" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Sep 30 17:20:48 crc kubenswrapper[4821]: E0930 17:20:48.743687 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ddaf91ae-a48b-4341-a72e-1e0068e295f9-kube-api-access-qfg4v podName:ddaf91ae-a48b-4341-a72e-1e0068e295f9 nodeName:}" failed. No retries permitted until 2025-09-30 17:20:49.243673167 +0000 UTC m=+1045.148719101 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qfg4v" (UniqueName: "kubernetes.io/projected/ddaf91ae-a48b-4341-a72e-1e0068e295f9-kube-api-access-qfg4v") pod "openstackclient" (UID: "ddaf91ae-a48b-4341-a72e-1e0068e295f9") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: User "system:node:crc" cannot create resource "serviceaccounts/token" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.745353 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.749895 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ddaf91ae-a48b-4341-a72e-1e0068e295f9-openstack-config-secret\") pod \"openstackclient\" (UID: \"ddaf91ae-a48b-4341-a72e-1e0068e295f9\") " pod="openstack/openstackclient" Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.755779 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddaf91ae-a48b-4341-a72e-1e0068e295f9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ddaf91ae-a48b-4341-a72e-1e0068e295f9\") " pod="openstack/openstackclient" Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.812658 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.813980 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.820412 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.843230 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e0d1319-9b26-4169-8ccd-82687b2d7986-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8e0d1319-9b26-4169-8ccd-82687b2d7986\") " pod="openstack/openstackclient" Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.843339 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8e0d1319-9b26-4169-8ccd-82687b2d7986-openstack-config-secret\") pod \"openstackclient\" (UID: \"8e0d1319-9b26-4169-8ccd-82687b2d7986\") " pod="openstack/openstackclient" Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.845667 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbxwd\" (UniqueName: \"kubernetes.io/projected/8e0d1319-9b26-4169-8ccd-82687b2d7986-kube-api-access-qbxwd\") pod \"openstackclient\" (UID: \"8e0d1319-9b26-4169-8ccd-82687b2d7986\") " pod="openstack/openstackclient" Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.845796 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8e0d1319-9b26-4169-8ccd-82687b2d7986-openstack-config\") pod \"openstackclient\" (UID: \"8e0d1319-9b26-4169-8ccd-82687b2d7986\") " pod="openstack/openstackclient" Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.947697 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e0d1319-9b26-4169-8ccd-82687b2d7986-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8e0d1319-9b26-4169-8ccd-82687b2d7986\") " pod="openstack/openstackclient" Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.947782 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8e0d1319-9b26-4169-8ccd-82687b2d7986-openstack-config-secret\") pod \"openstackclient\" (UID: \"8e0d1319-9b26-4169-8ccd-82687b2d7986\") " pod="openstack/openstackclient" Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.947826 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbxwd\" (UniqueName: \"kubernetes.io/projected/8e0d1319-9b26-4169-8ccd-82687b2d7986-kube-api-access-qbxwd\") pod \"openstackclient\" (UID: \"8e0d1319-9b26-4169-8ccd-82687b2d7986\") " pod="openstack/openstackclient" Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.947874 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8e0d1319-9b26-4169-8ccd-82687b2d7986-openstack-config\") pod \"openstackclient\" (UID: \"8e0d1319-9b26-4169-8ccd-82687b2d7986\") " pod="openstack/openstackclient" Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.948662 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8e0d1319-9b26-4169-8ccd-82687b2d7986-openstack-config\") pod \"openstackclient\" (UID: \"8e0d1319-9b26-4169-8ccd-82687b2d7986\") " pod="openstack/openstackclient" Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.964683 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8e0d1319-9b26-4169-8ccd-82687b2d7986-openstack-config-secret\") pod \"openstackclient\" (UID: \"8e0d1319-9b26-4169-8ccd-82687b2d7986\") " pod="openstack/openstackclient" Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.965297 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e0d1319-9b26-4169-8ccd-82687b2d7986-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8e0d1319-9b26-4169-8ccd-82687b2d7986\") " pod="openstack/openstackclient" Sep 30 17:20:48 crc kubenswrapper[4821]: I0930 17:20:48.970238 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbxwd\" (UniqueName: \"kubernetes.io/projected/8e0d1319-9b26-4169-8ccd-82687b2d7986-kube-api-access-qbxwd\") pod \"openstackclient\" (UID: \"8e0d1319-9b26-4169-8ccd-82687b2d7986\") " pod="openstack/openstackclient" Sep 30 17:20:49 crc kubenswrapper[4821]: I0930 17:20:49.109130 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5b974b45dd-mbzvm" podUID="abd2779c-c7a7-4d42-8e83-7cbec573d595" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Sep 30 17:20:49 crc kubenswrapper[4821]: I0930 17:20:49.148783 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Sep 30 17:20:49 crc kubenswrapper[4821]: I0930 17:20:49.252463 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfg4v\" (UniqueName: \"kubernetes.io/projected/ddaf91ae-a48b-4341-a72e-1e0068e295f9-kube-api-access-qfg4v\") pod \"openstackclient\" (UID: \"ddaf91ae-a48b-4341-a72e-1e0068e295f9\") " pod="openstack/openstackclient" Sep 30 17:20:49 crc kubenswrapper[4821]: E0930 17:20:49.257916 4821 projected.go:194] Error preparing data for projected volume kube-api-access-qfg4v for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (ddaf91ae-a48b-4341-a72e-1e0068e295f9) does not match the UID in record. The object might have been deleted and then recreated Sep 30 17:20:49 crc kubenswrapper[4821]: E0930 17:20:49.257994 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ddaf91ae-a48b-4341-a72e-1e0068e295f9-kube-api-access-qfg4v podName:ddaf91ae-a48b-4341-a72e-1e0068e295f9 nodeName:}" failed. No retries permitted until 2025-09-30 17:20:50.257976643 +0000 UTC m=+1046.163022587 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-qfg4v" (UniqueName: "kubernetes.io/projected/ddaf91ae-a48b-4341-a72e-1e0068e295f9-kube-api-access-qfg4v") pod "openstackclient" (UID: "ddaf91ae-a48b-4341-a72e-1e0068e295f9") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (ddaf91ae-a48b-4341-a72e-1e0068e295f9) does not match the UID in record. The object might have been deleted and then recreated Sep 30 17:20:49 crc kubenswrapper[4821]: I0930 17:20:49.595603 4821 generic.go:334] "Generic (PLEG): container finished" podID="abd2779c-c7a7-4d42-8e83-7cbec573d595" containerID="8b9d25a4612b66e90a5bf40aa674b5cc3cc6d13a971523e910f6f524e850cfd0" exitCode=0 Sep 30 17:20:49 crc kubenswrapper[4821]: I0930 17:20:49.595681 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Sep 30 17:20:49 crc kubenswrapper[4821]: I0930 17:20:49.595673 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b974b45dd-mbzvm" event={"ID":"abd2779c-c7a7-4d42-8e83-7cbec573d595","Type":"ContainerDied","Data":"8b9d25a4612b66e90a5bf40aa674b5cc3cc6d13a971523e910f6f524e850cfd0"} Sep 30 17:20:49 crc kubenswrapper[4821]: I0930 17:20:49.602757 4821 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="ddaf91ae-a48b-4341-a72e-1e0068e295f9" podUID="8e0d1319-9b26-4169-8ccd-82687b2d7986" Sep 30 17:20:49 crc kubenswrapper[4821]: I0930 17:20:49.605923 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Sep 30 17:20:49 crc kubenswrapper[4821]: I0930 17:20:49.618268 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Sep 30 17:20:49 crc kubenswrapper[4821]: I0930 17:20:49.657971 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ddaf91ae-a48b-4341-a72e-1e0068e295f9-openstack-config-secret\") pod \"ddaf91ae-a48b-4341-a72e-1e0068e295f9\" (UID: \"ddaf91ae-a48b-4341-a72e-1e0068e295f9\") " Sep 30 17:20:49 crc kubenswrapper[4821]: I0930 17:20:49.658094 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddaf91ae-a48b-4341-a72e-1e0068e295f9-combined-ca-bundle\") pod \"ddaf91ae-a48b-4341-a72e-1e0068e295f9\" (UID: \"ddaf91ae-a48b-4341-a72e-1e0068e295f9\") " Sep 30 17:20:49 crc kubenswrapper[4821]: I0930 17:20:49.658141 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ddaf91ae-a48b-4341-a72e-1e0068e295f9-openstack-config\") pod \"ddaf91ae-a48b-4341-a72e-1e0068e295f9\" (UID: \"ddaf91ae-a48b-4341-a72e-1e0068e295f9\") " Sep 30 17:20:49 crc kubenswrapper[4821]: I0930 17:20:49.658518 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfg4v\" (UniqueName: \"kubernetes.io/projected/ddaf91ae-a48b-4341-a72e-1e0068e295f9-kube-api-access-qfg4v\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:49 crc kubenswrapper[4821]: I0930 17:20:49.658727 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddaf91ae-a48b-4341-a72e-1e0068e295f9-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "ddaf91ae-a48b-4341-a72e-1e0068e295f9" (UID: "ddaf91ae-a48b-4341-a72e-1e0068e295f9"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:20:49 crc kubenswrapper[4821]: I0930 17:20:49.663587 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddaf91ae-a48b-4341-a72e-1e0068e295f9-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "ddaf91ae-a48b-4341-a72e-1e0068e295f9" (UID: "ddaf91ae-a48b-4341-a72e-1e0068e295f9"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:20:49 crc kubenswrapper[4821]: I0930 17:20:49.664367 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddaf91ae-a48b-4341-a72e-1e0068e295f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ddaf91ae-a48b-4341-a72e-1e0068e295f9" (UID: "ddaf91ae-a48b-4341-a72e-1e0068e295f9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:20:49 crc kubenswrapper[4821]: I0930 17:20:49.761640 4821 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ddaf91ae-a48b-4341-a72e-1e0068e295f9-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:49 crc kubenswrapper[4821]: I0930 17:20:49.761868 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddaf91ae-a48b-4341-a72e-1e0068e295f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:49 crc kubenswrapper[4821]: I0930 17:20:49.761880 4821 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ddaf91ae-a48b-4341-a72e-1e0068e295f9-openstack-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:50 crc kubenswrapper[4821]: I0930 17:20:50.614523 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8e0d1319-9b26-4169-8ccd-82687b2d7986","Type":"ContainerStarted","Data":"8417cfbc6bb0e5a9ed2d5daf17bcfdbd06b3a05d85e1d1516c076c9530b57690"} Sep 30 17:20:50 crc kubenswrapper[4821]: I0930 17:20:50.614552 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Sep 30 17:20:50 crc kubenswrapper[4821]: I0930 17:20:50.617784 4821 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="ddaf91ae-a48b-4341-a72e-1e0068e295f9" podUID="8e0d1319-9b26-4169-8ccd-82687b2d7986" Sep 30 17:20:50 crc kubenswrapper[4821]: I0930 17:20:50.720178 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddaf91ae-a48b-4341-a72e-1e0068e295f9" path="/var/lib/kubelet/pods/ddaf91ae-a48b-4341-a72e-1e0068e295f9/volumes" Sep 30 17:20:51 crc kubenswrapper[4821]: I0930 17:20:51.165606 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Sep 30 17:20:52 crc kubenswrapper[4821]: I0930 17:20:52.922412 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-wdd45"] Sep 30 17:20:52 crc kubenswrapper[4821]: I0930 17:20:52.923789 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wdd45" Sep 30 17:20:52 crc kubenswrapper[4821]: I0930 17:20:52.934105 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-wdd45"] Sep 30 17:20:53 crc kubenswrapper[4821]: I0930 17:20:53.024945 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms69f\" (UniqueName: \"kubernetes.io/projected/a00a4364-421a-47dc-8dc9-a8a54f71f563-kube-api-access-ms69f\") pod \"nova-api-db-create-wdd45\" (UID: \"a00a4364-421a-47dc-8dc9-a8a54f71f563\") " pod="openstack/nova-api-db-create-wdd45" Sep 30 17:20:53 crc kubenswrapper[4821]: I0930 17:20:53.030609 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-knwz7"] Sep 30 17:20:53 crc kubenswrapper[4821]: I0930 17:20:53.031694 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-knwz7" Sep 30 17:20:53 crc kubenswrapper[4821]: I0930 17:20:53.052932 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-knwz7"] Sep 30 17:20:53 crc kubenswrapper[4821]: I0930 17:20:53.126329 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkp89\" (UniqueName: \"kubernetes.io/projected/44c601d1-e185-448f-81b1-1b42fbe9bb3f-kube-api-access-pkp89\") pod \"nova-cell0-db-create-knwz7\" (UID: \"44c601d1-e185-448f-81b1-1b42fbe9bb3f\") " pod="openstack/nova-cell0-db-create-knwz7" Sep 30 17:20:53 crc kubenswrapper[4821]: I0930 17:20:53.126416 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms69f\" (UniqueName: \"kubernetes.io/projected/a00a4364-421a-47dc-8dc9-a8a54f71f563-kube-api-access-ms69f\") pod \"nova-api-db-create-wdd45\" (UID: \"a00a4364-421a-47dc-8dc9-a8a54f71f563\") " pod="openstack/nova-api-db-create-wdd45" Sep 30 17:20:53 crc kubenswrapper[4821]: I0930 17:20:53.155029 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms69f\" (UniqueName: \"kubernetes.io/projected/a00a4364-421a-47dc-8dc9-a8a54f71f563-kube-api-access-ms69f\") pod \"nova-api-db-create-wdd45\" (UID: \"a00a4364-421a-47dc-8dc9-a8a54f71f563\") " pod="openstack/nova-api-db-create-wdd45" Sep 30 17:20:53 crc kubenswrapper[4821]: I0930 17:20:53.228328 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkp89\" (UniqueName: \"kubernetes.io/projected/44c601d1-e185-448f-81b1-1b42fbe9bb3f-kube-api-access-pkp89\") pod \"nova-cell0-db-create-knwz7\" (UID: \"44c601d1-e185-448f-81b1-1b42fbe9bb3f\") " pod="openstack/nova-cell0-db-create-knwz7" Sep 30 17:20:53 crc kubenswrapper[4821]: I0930 17:20:53.228621 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-sx76g"] Sep 30 17:20:53 crc kubenswrapper[4821]: I0930 17:20:53.229604 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-sx76g" Sep 30 17:20:53 crc kubenswrapper[4821]: I0930 17:20:53.238983 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wdd45" Sep 30 17:20:53 crc kubenswrapper[4821]: I0930 17:20:53.253215 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-sx76g"] Sep 30 17:20:53 crc kubenswrapper[4821]: I0930 17:20:53.266932 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkp89\" (UniqueName: \"kubernetes.io/projected/44c601d1-e185-448f-81b1-1b42fbe9bb3f-kube-api-access-pkp89\") pod \"nova-cell0-db-create-knwz7\" (UID: \"44c601d1-e185-448f-81b1-1b42fbe9bb3f\") " pod="openstack/nova-cell0-db-create-knwz7" Sep 30 17:20:53 crc kubenswrapper[4821]: I0930 17:20:53.330978 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwkp7\" (UniqueName: \"kubernetes.io/projected/81f8ab48-a8ad-47e5-b71e-0fe3d39bff5e-kube-api-access-dwkp7\") pod \"nova-cell1-db-create-sx76g\" (UID: \"81f8ab48-a8ad-47e5-b71e-0fe3d39bff5e\") " pod="openstack/nova-cell1-db-create-sx76g" Sep 30 17:20:53 crc kubenswrapper[4821]: I0930 17:20:53.353422 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-knwz7" Sep 30 17:20:53 crc kubenswrapper[4821]: I0930 17:20:53.438334 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwkp7\" (UniqueName: \"kubernetes.io/projected/81f8ab48-a8ad-47e5-b71e-0fe3d39bff5e-kube-api-access-dwkp7\") pod \"nova-cell1-db-create-sx76g\" (UID: \"81f8ab48-a8ad-47e5-b71e-0fe3d39bff5e\") " pod="openstack/nova-cell1-db-create-sx76g" Sep 30 17:20:53 crc kubenswrapper[4821]: I0930 17:20:53.460059 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwkp7\" (UniqueName: \"kubernetes.io/projected/81f8ab48-a8ad-47e5-b71e-0fe3d39bff5e-kube-api-access-dwkp7\") pod \"nova-cell1-db-create-sx76g\" (UID: \"81f8ab48-a8ad-47e5-b71e-0fe3d39bff5e\") " pod="openstack/nova-cell1-db-create-sx76g" Sep 30 17:20:53 crc kubenswrapper[4821]: I0930 17:20:53.555483 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-sx76g" Sep 30 17:20:53 crc kubenswrapper[4821]: I0930 17:20:53.893913 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-wdd45"] Sep 30 17:20:54 crc kubenswrapper[4821]: I0930 17:20:54.009030 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-knwz7"] Sep 30 17:20:54 crc kubenswrapper[4821]: I0930 17:20:54.140155 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-sx76g"] Sep 30 17:20:54 crc kubenswrapper[4821]: I0930 17:20:54.664249 4821 generic.go:334] "Generic (PLEG): container finished" podID="81f8ab48-a8ad-47e5-b71e-0fe3d39bff5e" containerID="bcb3a3caa45f4d3faba7906f1180cb61d00db893a87459507a7a966f6d865c16" exitCode=0 Sep 30 17:20:54 crc kubenswrapper[4821]: I0930 17:20:54.664346 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-sx76g" event={"ID":"81f8ab48-a8ad-47e5-b71e-0fe3d39bff5e","Type":"ContainerDied","Data":"bcb3a3caa45f4d3faba7906f1180cb61d00db893a87459507a7a966f6d865c16"} Sep 30 17:20:54 crc kubenswrapper[4821]: I0930 17:20:54.664373 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-sx76g" event={"ID":"81f8ab48-a8ad-47e5-b71e-0fe3d39bff5e","Type":"ContainerStarted","Data":"dde8edee8ca8feb7cd4d12ed26964689ac4a5f38814c401f37af22a82bce5260"} Sep 30 17:20:54 crc kubenswrapper[4821]: I0930 17:20:54.667501 4821 generic.go:334] "Generic (PLEG): container finished" podID="44c601d1-e185-448f-81b1-1b42fbe9bb3f" containerID="6058b6cefab7c1516d9467bc7f648ea77502334eb371b5559250ddaf2cb601df" exitCode=0 Sep 30 17:20:54 crc kubenswrapper[4821]: I0930 17:20:54.667609 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-knwz7" event={"ID":"44c601d1-e185-448f-81b1-1b42fbe9bb3f","Type":"ContainerDied","Data":"6058b6cefab7c1516d9467bc7f648ea77502334eb371b5559250ddaf2cb601df"} Sep 30 17:20:54 crc kubenswrapper[4821]: I0930 17:20:54.667633 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-knwz7" event={"ID":"44c601d1-e185-448f-81b1-1b42fbe9bb3f","Type":"ContainerStarted","Data":"cfa302691cd69fa4d03f82a062626647a65955edaba8e03f0fd1516b66d6c25b"} Sep 30 17:20:54 crc kubenswrapper[4821]: I0930 17:20:54.672525 4821 generic.go:334] "Generic (PLEG): container finished" podID="a00a4364-421a-47dc-8dc9-a8a54f71f563" containerID="cf90332090bbed33666f9731a7ea4a122519d567a5e1d351c750a17823a716e3" exitCode=0 Sep 30 17:20:54 
crc kubenswrapper[4821]: I0930 17:20:54.672552 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wdd45" event={"ID":"a00a4364-421a-47dc-8dc9-a8a54f71f563","Type":"ContainerDied","Data":"cf90332090bbed33666f9731a7ea4a122519d567a5e1d351c750a17823a716e3"} Sep 30 17:20:54 crc kubenswrapper[4821]: I0930 17:20:54.672569 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wdd45" event={"ID":"a00a4364-421a-47dc-8dc9-a8a54f71f563","Type":"ContainerStarted","Data":"e7a19da773748a0acf57ff0a23af4651e683225f555b4e1e09ac7347e14306a4"} Sep 30 17:20:59 crc kubenswrapper[4821]: I0930 17:20:59.110347 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5b974b45dd-mbzvm" podUID="abd2779c-c7a7-4d42-8e83-7cbec573d595" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Sep 30 17:21:00 crc kubenswrapper[4821]: I0930 17:21:00.041702 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-sx76g" Sep 30 17:21:00 crc kubenswrapper[4821]: I0930 17:21:00.062275 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wdd45" Sep 30 17:21:00 crc kubenswrapper[4821]: I0930 17:21:00.071564 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwkp7\" (UniqueName: \"kubernetes.io/projected/81f8ab48-a8ad-47e5-b71e-0fe3d39bff5e-kube-api-access-dwkp7\") pod \"81f8ab48-a8ad-47e5-b71e-0fe3d39bff5e\" (UID: \"81f8ab48-a8ad-47e5-b71e-0fe3d39bff5e\") " Sep 30 17:21:00 crc kubenswrapper[4821]: I0930 17:21:00.082870 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-knwz7" Sep 30 17:21:00 crc kubenswrapper[4821]: I0930 17:21:00.092900 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81f8ab48-a8ad-47e5-b71e-0fe3d39bff5e-kube-api-access-dwkp7" (OuterVolumeSpecName: "kube-api-access-dwkp7") pod "81f8ab48-a8ad-47e5-b71e-0fe3d39bff5e" (UID: "81f8ab48-a8ad-47e5-b71e-0fe3d39bff5e"). InnerVolumeSpecName "kube-api-access-dwkp7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:21:00 crc kubenswrapper[4821]: I0930 17:21:00.173430 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkp89\" (UniqueName: \"kubernetes.io/projected/44c601d1-e185-448f-81b1-1b42fbe9bb3f-kube-api-access-pkp89\") pod \"44c601d1-e185-448f-81b1-1b42fbe9bb3f\" (UID: \"44c601d1-e185-448f-81b1-1b42fbe9bb3f\") " Sep 30 17:21:00 crc kubenswrapper[4821]: I0930 17:21:00.173595 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms69f\" (UniqueName: \"kubernetes.io/projected/a00a4364-421a-47dc-8dc9-a8a54f71f563-kube-api-access-ms69f\") pod \"a00a4364-421a-47dc-8dc9-a8a54f71f563\" (UID: \"a00a4364-421a-47dc-8dc9-a8a54f71f563\") " Sep 30 17:21:00 crc kubenswrapper[4821]: I0930 17:21:00.173998 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwkp7\" (UniqueName: \"kubernetes.io/projected/81f8ab48-a8ad-47e5-b71e-0fe3d39bff5e-kube-api-access-dwkp7\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:00 crc kubenswrapper[4821]: I0930 17:21:00.177982 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44c601d1-e185-448f-81b1-1b42fbe9bb3f-kube-api-access-pkp89" (OuterVolumeSpecName: "kube-api-access-pkp89") pod "44c601d1-e185-448f-81b1-1b42fbe9bb3f" (UID: "44c601d1-e185-448f-81b1-1b42fbe9bb3f"). InnerVolumeSpecName "kube-api-access-pkp89". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:21:00 crc kubenswrapper[4821]: I0930 17:21:00.178107 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a00a4364-421a-47dc-8dc9-a8a54f71f563-kube-api-access-ms69f" (OuterVolumeSpecName: "kube-api-access-ms69f") pod "a00a4364-421a-47dc-8dc9-a8a54f71f563" (UID: "a00a4364-421a-47dc-8dc9-a8a54f71f563"). InnerVolumeSpecName "kube-api-access-ms69f". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:21:00 crc kubenswrapper[4821]: I0930 17:21:00.275619 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkp89\" (UniqueName: \"kubernetes.io/projected/44c601d1-e185-448f-81b1-1b42fbe9bb3f-kube-api-access-pkp89\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:00 crc kubenswrapper[4821]: I0930 17:21:00.275998 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms69f\" (UniqueName: \"kubernetes.io/projected/a00a4364-421a-47dc-8dc9-a8a54f71f563-kube-api-access-ms69f\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:00 crc kubenswrapper[4821]: I0930 17:21:00.735288 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8e0d1319-9b26-4169-8ccd-82687b2d7986","Type":"ContainerStarted","Data":"3c4d139a4e4eb1d8c6a6bc470688135204d767c99c7357577def48ffe5ec4dae"} Sep 30 17:21:00 crc kubenswrapper[4821]: I0930 17:21:00.738429 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-sx76g" Sep 30 17:21:00 crc kubenswrapper[4821]: I0930 17:21:00.739341 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-sx76g" event={"ID":"81f8ab48-a8ad-47e5-b71e-0fe3d39bff5e","Type":"ContainerDied","Data":"dde8edee8ca8feb7cd4d12ed26964689ac4a5f38814c401f37af22a82bce5260"} Sep 30 17:21:00 crc kubenswrapper[4821]: I0930 17:21:00.739450 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dde8edee8ca8feb7cd4d12ed26964689ac4a5f38814c401f37af22a82bce5260" Sep 30 17:21:00 crc kubenswrapper[4821]: I0930 17:21:00.741592 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-knwz7" event={"ID":"44c601d1-e185-448f-81b1-1b42fbe9bb3f","Type":"ContainerDied","Data":"cfa302691cd69fa4d03f82a062626647a65955edaba8e03f0fd1516b66d6c25b"} Sep 30 17:21:00 crc kubenswrapper[4821]: I0930 17:21:00.741639 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfa302691cd69fa4d03f82a062626647a65955edaba8e03f0fd1516b66d6c25b" Sep 30 17:21:00 crc kubenswrapper[4821]: I0930 17:21:00.741700 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-knwz7" Sep 30 17:21:00 crc kubenswrapper[4821]: I0930 17:21:00.743680 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wdd45" event={"ID":"a00a4364-421a-47dc-8dc9-a8a54f71f563","Type":"ContainerDied","Data":"e7a19da773748a0acf57ff0a23af4651e683225f555b4e1e09ac7347e14306a4"} Sep 30 17:21:00 crc kubenswrapper[4821]: I0930 17:21:00.743718 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7a19da773748a0acf57ff0a23af4651e683225f555b4e1e09ac7347e14306a4" Sep 30 17:21:00 crc kubenswrapper[4821]: I0930 17:21:00.744017 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-wdd45" Sep 30 17:21:00 crc kubenswrapper[4821]: I0930 17:21:00.758276 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.327056322 podStartE2EDuration="12.758255206s" podCreationTimestamp="2025-09-30 17:20:48 +0000 UTC" firstStartedPulling="2025-09-30 17:20:49.622339279 +0000 UTC m=+1045.527385223" lastFinishedPulling="2025-09-30 17:21:00.053538163 +0000 UTC m=+1055.958584107" observedRunningTime="2025-09-30 17:21:00.752634856 +0000 UTC m=+1056.657680800" watchObservedRunningTime="2025-09-30 17:21:00.758255206 +0000 UTC m=+1056.663301150" Sep 30 17:21:01 crc kubenswrapper[4821]: W0930 17:21:01.382839 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb63853ff_e8ce_44a7_8ab1_ced03324999d.slice/crio-fcb60c3eea1196d2fa943ebfaa32fe8f6962db1cfe4ae1e460a36b29fdce6359.scope WatchSource:0}: Error finding container fcb60c3eea1196d2fa943ebfaa32fe8f6962db1cfe4ae1e460a36b29fdce6359: Status 404 returned error can't find the container with id fcb60c3eea1196d2fa943ebfaa32fe8f6962db1cfe4ae1e460a36b29fdce6359 Sep 30 17:21:01 crc kubenswrapper[4821]: W0930 17:21:01.389254 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb63853ff_e8ce_44a7_8ab1_ced03324999d.slice/crio-8cf2e2e8264341ba92599fc3b16a959200cde473d4b8897ea4886373d4587b15.scope WatchSource:0}: Error finding container 8cf2e2e8264341ba92599fc3b16a959200cde473d4b8897ea4886373d4587b15: Status 404 returned error can't find the container with id 8cf2e2e8264341ba92599fc3b16a959200cde473d4b8897ea4886373d4587b15 Sep 30 17:21:01 crc kubenswrapper[4821]: W0930 17:21:01.390577 4821 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddaf91ae_a48b_4341_a72e_1e0068e295f9.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddaf91ae_a48b_4341_a72e_1e0068e295f9.slice: no such file or directory Sep 30 17:21:01 crc kubenswrapper[4821]: W0930 17:21:01.391139 4821 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda00a4364_421a_47dc_8dc9_a8a54f71f563.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda00a4364_421a_47dc_8dc9_a8a54f71f563.slice: no such file or directory Sep 30 17:21:01 crc kubenswrapper[4821]: W0930 17:21:01.391180 4821 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44c601d1_e185_448f_81b1_1b42fbe9bb3f.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44c601d1_e185_448f_81b1_1b42fbe9bb3f.slice: no such file or directory Sep 30 17:21:01 crc kubenswrapper[4821]: W0930 17:21:01.391202 4821 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81f8ab48_a8ad_47e5_b71e_0fe3d39bff5e.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81f8ab48_a8ad_47e5_b71e_0fe3d39bff5e.slice: no such file or directory Sep 30 17:21:01 crc 
kubenswrapper[4821]: E0930 17:21:01.607927 4821 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9366bc1_d7bf_412a_bf0d_a122e3a3d10f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1e20153_619c_4c3a_93ef_39c4b87d535e.slice/crio-2191569581b539409bb99d26050a98cad5524588cb6a92034708d80248ac1a33.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf0686e3_68e9_45aa_a625_ba24fc284342.slice/crio-conmon-c35bbaf60f715bc9224b7040c1ca471a912ab509c7cafa377e2ba7a2bbfa3915.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb63853ff_e8ce_44a7_8ab1_ced03324999d.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf0686e3_68e9_45aa_a625_ba24fc284342.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9366bc1_d7bf_412a_bf0d_a122e3a3d10f.slice/crio-753b32079f01a570cf60474ae0baf796e92256442342c4c67cda2fff5a80e70a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1e20153_619c_4c3a_93ef_39c4b87d535e.slice/crio-conmon-9ee08e4aa4a5c0e87bd8281c46e23edb718b6ba7a62ac6ae30c11cdcadc4b0ba.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1e20153_619c_4c3a_93ef_39c4b87d535e.slice/crio-9ee08e4aa4a5c0e87bd8281c46e23edb718b6ba7a62ac6ae30c11cdcadc4b0ba.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda45b832b_22cd_47fa_bd7f_83ad23d2d135.slice/crio-7c8adb45471709c246f83939b757fbd276c1ceecc352a8177d6284b63b156583.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod145b7040_eb73_4b29_9e7a_a96d867530c5.slice/crio-conmon-44d96c13026608943cb8f4546f8344620f310d0e3451cbdd705e0b0ca636be17.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod145b7040_eb73_4b29_9e7a_a96d867530c5.slice/crio-0a06a18fe51474c2d0b7c5c28b83a94244a67f36a58279c92932e067949d5415\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod882bc52d_7f8b_461d_b7ae_b1e8660897ef.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf0686e3_68e9_45aa_a625_ba24fc284342.slice/crio-c35bbaf60f715bc9224b7040c1ca471a912ab509c7cafa377e2ba7a2bbfa3915.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf0686e3_68e9_45aa_a625_ba24fc284342.slice/crio-conmon-22ee3c074e073d03da770651fc0efe31c8dacaa5e41e82996264c698d02e5252.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod882bc52d_7f8b_461d_b7ae_b1e8660897ef.slice/crio-conmon-8bfc289c53a6ff99983f0453082ed8a9bb7c2b677b5a0fc176331a8e17ab8377.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod145b7040_eb73_4b29_9e7a_a96d867530c5.slice/crio-conmon-a0ad46de1519e1676ac4e5d92f313c7d090e740668982e886722e854d3f3afe6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf0686e3_68e9_45aa_a625_ba24fc284342.slice/crio-c891227b397ff7d8612625ae73d9a3854cf0489435060594c3a2f58536094d7a\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod145b7040_eb73_4b29_9e7a_a96d867530c5.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb63853ff_e8ce_44a7_8ab1_ced03324999d.slice/crio-0b65a7de65d541fb68937534e005db9f328ced4a2d5f719101c77af0274bb121\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod145b7040_eb73_4b29_9e7a_a96d867530c5.slice/crio-44d96c13026608943cb8f4546f8344620f310d0e3451cbdd705e0b0ca636be17.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1e20153_619c_4c3a_93ef_39c4b87d535e.slice/crio-conmon-2191569581b539409bb99d26050a98cad5524588cb6a92034708d80248ac1a33.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1e20153_619c_4c3a_93ef_39c4b87d535e.slice/crio-beb1c7a432107f7a89041076a65ac4dd2de6b48302e9303f1ff81687db95c531\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9366bc1_d7bf_412a_bf0d_a122e3a3d10f.slice/crio-conmon-753b32079f01a570cf60474ae0baf796e92256442342c4c67cda2fff5a80e70a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda45b832b_22cd_47fa_bd7f_83ad23d2d135.slice/crio-conmon-7c8adb45471709c246f83939b757fbd276c1ceecc352a8177d6284b63b156583.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabd2779c_c7a7_4d42_8e83_7cbec573d595.slice/crio-conmon-8b9d25a4612b66e90a5bf40aa674b5cc3cc6d13a971523e910f6f524e850cfd0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9366bc1_d7bf_412a_bf0d_a122e3a3d10f.slice/crio-conmon-454f5a23c528e73d08aae9575738d92de0eeb556525dc20af420bcd7009ccbed.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9366bc1_d7bf_412a_bf0d_a122e3a3d10f.slice/crio-7e78df174183dfec0fc8626a059b82b9426d889f11b8643f676cfffb8461de3a\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod145b7040_eb73_4b29_9e7a_a96d867530c5.slice/crio-a0ad46de1519e1676ac4e5d92f313c7d090e740668982e886722e854d3f3afe6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1e20153_619c_4c3a_93ef_39c4b87d535e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9366bc1_d7bf_412a_bf0d_a122e3a3d10f.slice/crio-454f5a23c528e73d08aae9575738d92de0eeb556525dc20af420bcd7009ccbed.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod882bc52d_7f8b_461d_b7ae_b1e8660897ef.slice/crio-8bfc289c53a6ff99983f0453082ed8a9bb7c2b677b5a0fc176331a8e17ab8377.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod882bc52d_7f8b_461d_b7ae_b1e8660897ef.slice/crio-0887568f576da04fc4ccca20e9ceb88c0879f51fbda4513aabf11bcde07bfd77\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf0686e3_68e9_45aa_a625_ba24fc284342.slice/crio-22ee3c074e073d03da770651fc0efe31c8dacaa5e41e82996264c698d02e5252.scope\": RecentStats: unable to find data in memory cache]" Sep 30 17:21:01 crc kubenswrapper[4821]: I0930 17:21:01.752630 4821 generic.go:334] "Generic (PLEG): container finished" podID="88a60aa8-fb9c-4813-99b2-7d01ec08aa05" containerID="ad1fb2e868d1f5b5a7dd362a2e669a7857d58cfff69e134bc0ae56f28a23eb7a" exitCode=137 Sep 30 17:21:01 crc kubenswrapper[4821]: I0930 17:21:01.759117 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"88a60aa8-fb9c-4813-99b2-7d01ec08aa05","Type":"ContainerDied","Data":"ad1fb2e868d1f5b5a7dd362a2e669a7857d58cfff69e134bc0ae56f28a23eb7a"} Sep 30 17:21:01 crc kubenswrapper[4821]: I0930 17:21:01.759169 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"88a60aa8-fb9c-4813-99b2-7d01ec08aa05","Type":"ContainerDied","Data":"ab1b1af1819144ba0f7109b89ab752f887bbff9b91de3d6eb088259ee616e17c"} Sep 30 17:21:01 crc kubenswrapper[4821]: I0930 17:21:01.759180 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab1b1af1819144ba0f7109b89ab752f887bbff9b91de3d6eb088259ee616e17c" Sep 30 17:21:01 crc kubenswrapper[4821]: I0930 17:21:01.787178 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Sep 30 17:21:01 crc kubenswrapper[4821]: I0930 17:21:01.930851 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-combined-ca-bundle\") pod \"88a60aa8-fb9c-4813-99b2-7d01ec08aa05\" (UID: \"88a60aa8-fb9c-4813-99b2-7d01ec08aa05\") " Sep 30 17:21:01 crc kubenswrapper[4821]: I0930 17:21:01.930892 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-etc-machine-id\") pod \"88a60aa8-fb9c-4813-99b2-7d01ec08aa05\" (UID: \"88a60aa8-fb9c-4813-99b2-7d01ec08aa05\") " Sep 30 17:21:01 crc kubenswrapper[4821]: I0930 17:21:01.930960 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-config-data-custom\") pod \"88a60aa8-fb9c-4813-99b2-7d01ec08aa05\" (UID: \"88a60aa8-fb9c-4813-99b2-7d01ec08aa05\") " Sep 30 17:21:01 crc kubenswrapper[4821]: I0930 17:21:01.931000 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-scripts\") pod \"88a60aa8-fb9c-4813-99b2-7d01ec08aa05\" (UID: \"88a60aa8-fb9c-4813-99b2-7d01ec08aa05\") " Sep 30 17:21:01 crc kubenswrapper[4821]: I0930 17:21:01.931023 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gcbv\" (UniqueName: \"kubernetes.io/projected/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-kube-api-access-7gcbv\") pod \"88a60aa8-fb9c-4813-99b2-7d01ec08aa05\" (UID: \"88a60aa8-fb9c-4813-99b2-7d01ec08aa05\") " Sep 30 17:21:01 crc kubenswrapper[4821]: I0930 17:21:01.931114 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-logs\") pod \"88a60aa8-fb9c-4813-99b2-7d01ec08aa05\" (UID: \"88a60aa8-fb9c-4813-99b2-7d01ec08aa05\") " Sep 30 17:21:01 crc kubenswrapper[4821]: I0930 17:21:01.931195 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-config-data\") pod \"88a60aa8-fb9c-4813-99b2-7d01ec08aa05\" (UID: \"88a60aa8-fb9c-4813-99b2-7d01ec08aa05\") " Sep 30 17:21:01 crc kubenswrapper[4821]: I0930 17:21:01.931697 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "88a60aa8-fb9c-4813-99b2-7d01ec08aa05" (UID: "88a60aa8-fb9c-4813-99b2-7d01ec08aa05"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:21:01 crc kubenswrapper[4821]: I0930 17:21:01.932612 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-logs" (OuterVolumeSpecName: "logs") pod "88a60aa8-fb9c-4813-99b2-7d01ec08aa05" (UID: "88a60aa8-fb9c-4813-99b2-7d01ec08aa05"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:21:01 crc kubenswrapper[4821]: I0930 17:21:01.937380 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "88a60aa8-fb9c-4813-99b2-7d01ec08aa05" (UID: "88a60aa8-fb9c-4813-99b2-7d01ec08aa05"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:21:01 crc kubenswrapper[4821]: I0930 17:21:01.958052 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-kube-api-access-7gcbv" (OuterVolumeSpecName: "kube-api-access-7gcbv") pod "88a60aa8-fb9c-4813-99b2-7d01ec08aa05" (UID: "88a60aa8-fb9c-4813-99b2-7d01ec08aa05"). InnerVolumeSpecName "kube-api-access-7gcbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:21:01 crc kubenswrapper[4821]: I0930 17:21:01.972714 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-scripts" (OuterVolumeSpecName: "scripts") pod "88a60aa8-fb9c-4813-99b2-7d01ec08aa05" (UID: "88a60aa8-fb9c-4813-99b2-7d01ec08aa05"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.007120 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88a60aa8-fb9c-4813-99b2-7d01ec08aa05" (UID: "88a60aa8-fb9c-4813-99b2-7d01ec08aa05"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.033462 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-config-data" (OuterVolumeSpecName: "config-data") pod "88a60aa8-fb9c-4813-99b2-7d01ec08aa05" (UID: "88a60aa8-fb9c-4813-99b2-7d01ec08aa05"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.035036 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.035065 4821 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-etc-machine-id\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.035077 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.035099 4821 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-config-data-custom\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.035108 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.035117 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gcbv\" (UniqueName: \"kubernetes.io/projected/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-kube-api-access-7gcbv\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.035127 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88a60aa8-fb9c-4813-99b2-7d01ec08aa05-logs\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.101247 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.101692 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="72758fd1-eda8-4729-b57c-8b357ada068a" containerName="glance-log" containerID="cri-o://cf41986fda4639b28a53047b338d6d6684994da1901771aea3061284b8798661" gracePeriod=30 Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.101732 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="72758fd1-eda8-4729-b57c-8b357ada068a" containerName="glance-httpd" containerID="cri-o://343613df0dd5a4f886e14ee10b73f068a13da7ac4077cbf50a22140682bdab81" gracePeriod=30 Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.761557 4821 generic.go:334] "Generic (PLEG): container finished" podID="72758fd1-eda8-4729-b57c-8b357ada068a" containerID="cf41986fda4639b28a53047b338d6d6684994da1901771aea3061284b8798661" exitCode=143 Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.761634 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.762293 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"72758fd1-eda8-4729-b57c-8b357ada068a","Type":"ContainerDied","Data":"cf41986fda4639b28a53047b338d6d6684994da1901771aea3061284b8798661"} Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.793481 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.803183 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.815605 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Sep 30 17:21:02 crc kubenswrapper[4821]: E0930 17:21:02.816027 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44c601d1-e185-448f-81b1-1b42fbe9bb3f" containerName="mariadb-database-create" Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.816052 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="44c601d1-e185-448f-81b1-1b42fbe9bb3f" containerName="mariadb-database-create" Sep 30 17:21:02 crc kubenswrapper[4821]: E0930 17:21:02.816075 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a00a4364-421a-47dc-8dc9-a8a54f71f563" containerName="mariadb-database-create" Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.816347 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="a00a4364-421a-47dc-8dc9-a8a54f71f563" containerName="mariadb-database-create" Sep 30 17:21:02 crc kubenswrapper[4821]: E0930 17:21:02.816362 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f8ab48-a8ad-47e5-b71e-0fe3d39bff5e" containerName="mariadb-database-create" Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.816369 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f8ab48-a8ad-47e5-b71e-0fe3d39bff5e" containerName="mariadb-database-create" Sep 30 17:21:02 crc kubenswrapper[4821]: E0930 17:21:02.816384 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88a60aa8-fb9c-4813-99b2-7d01ec08aa05" containerName="cinder-api-log" Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.816391 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="88a60aa8-fb9c-4813-99b2-7d01ec08aa05" containerName="cinder-api-log" Sep 30 17:21:02 crc kubenswrapper[4821]: E0930 17:21:02.816409 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88a60aa8-fb9c-4813-99b2-7d01ec08aa05" containerName="cinder-api" Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.816417 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="88a60aa8-fb9c-4813-99b2-7d01ec08aa05" containerName="cinder-api" Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.816640 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="88a60aa8-fb9c-4813-99b2-7d01ec08aa05" containerName="cinder-api-log" Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.816665 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="88a60aa8-fb9c-4813-99b2-7d01ec08aa05" containerName="cinder-api" Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.816680 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="81f8ab48-a8ad-47e5-b71e-0fe3d39bff5e" containerName="mariadb-database-create" Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.816695 4821 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="a00a4364-421a-47dc-8dc9-a8a54f71f563" containerName="mariadb-database-create" Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.816705 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="44c601d1-e185-448f-81b1-1b42fbe9bb3f" containerName="mariadb-database-create" Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.817869 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.821106 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.821277 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.828866 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.834734 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.948804 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q5gv\" (UniqueName: \"kubernetes.io/projected/3943f2e8-623c-4f63-b427-a9190a41608f-kube-api-access-4q5gv\") pod \"cinder-api-0\" (UID: \"3943f2e8-623c-4f63-b427-a9190a41608f\") " pod="openstack/cinder-api-0" Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.948850 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3943f2e8-623c-4f63-b427-a9190a41608f-scripts\") pod \"cinder-api-0\" (UID: \"3943f2e8-623c-4f63-b427-a9190a41608f\") " pod="openstack/cinder-api-0" Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.949001 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3943f2e8-623c-4f63-b427-a9190a41608f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3943f2e8-623c-4f63-b427-a9190a41608f\") " pod="openstack/cinder-api-0" Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.949120 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3943f2e8-623c-4f63-b427-a9190a41608f-config-data\") pod \"cinder-api-0\" (UID: \"3943f2e8-623c-4f63-b427-a9190a41608f\") " pod="openstack/cinder-api-0" Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.949246 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3943f2e8-623c-4f63-b427-a9190a41608f-config-data-custom\") pod \"cinder-api-0\" (UID: \"3943f2e8-623c-4f63-b427-a9190a41608f\") " pod="openstack/cinder-api-0" Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.949289 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3943f2e8-623c-4f63-b427-a9190a41608f-logs\") pod \"cinder-api-0\" (UID: \"3943f2e8-623c-4f63-b427-a9190a41608f\") " pod="openstack/cinder-api-0" Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.949383 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/3943f2e8-623c-4f63-b427-a9190a41608f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3943f2e8-623c-4f63-b427-a9190a41608f\") " pod="openstack/cinder-api-0" Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.949438 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3943f2e8-623c-4f63-b427-a9190a41608f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3943f2e8-623c-4f63-b427-a9190a41608f\") " pod="openstack/cinder-api-0" Sep 30 17:21:02 crc kubenswrapper[4821]: I0930 17:21:02.949461 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3943f2e8-623c-4f63-b427-a9190a41608f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3943f2e8-623c-4f63-b427-a9190a41608f\") " pod="openstack/cinder-api-0" Sep 30 17:21:03 crc kubenswrapper[4821]: I0930 17:21:03.051412 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3943f2e8-623c-4f63-b427-a9190a41608f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3943f2e8-623c-4f63-b427-a9190a41608f\") " pod="openstack/cinder-api-0" Sep 30 17:21:03 crc kubenswrapper[4821]: I0930 17:21:03.051491 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3943f2e8-623c-4f63-b427-a9190a41608f-config-data\") pod \"cinder-api-0\" (UID: \"3943f2e8-623c-4f63-b427-a9190a41608f\") " pod="openstack/cinder-api-0" Sep 30 17:21:03 crc kubenswrapper[4821]: I0930 17:21:03.051557 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3943f2e8-623c-4f63-b427-a9190a41608f-config-data-custom\") pod \"cinder-api-0\" (UID: \"3943f2e8-623c-4f63-b427-a9190a41608f\") " pod="openstack/cinder-api-0" Sep 30 17:21:03 crc kubenswrapper[4821]: I0930 17:21:03.051589 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3943f2e8-623c-4f63-b427-a9190a41608f-logs\") pod \"cinder-api-0\" (UID: \"3943f2e8-623c-4f63-b427-a9190a41608f\") " pod="openstack/cinder-api-0" Sep 30 17:21:03 crc kubenswrapper[4821]: I0930 17:21:03.051628 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3943f2e8-623c-4f63-b427-a9190a41608f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3943f2e8-623c-4f63-b427-a9190a41608f\") " pod="openstack/cinder-api-0" Sep 30 17:21:03 crc kubenswrapper[4821]: I0930 17:21:03.051655 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3943f2e8-623c-4f63-b427-a9190a41608f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3943f2e8-623c-4f63-b427-a9190a41608f\") " pod="openstack/cinder-api-0" Sep 30 17:21:03 crc kubenswrapper[4821]: I0930 17:21:03.051680 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3943f2e8-623c-4f63-b427-a9190a41608f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3943f2e8-623c-4f63-b427-a9190a41608f\") " pod="openstack/cinder-api-0" Sep 30 17:21:03 crc kubenswrapper[4821]: I0930 17:21:03.051710 4821 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4q5gv\" (UniqueName: \"kubernetes.io/projected/3943f2e8-623c-4f63-b427-a9190a41608f-kube-api-access-4q5gv\") pod \"cinder-api-0\" (UID: \"3943f2e8-623c-4f63-b427-a9190a41608f\") " pod="openstack/cinder-api-0" Sep 30 17:21:03 crc kubenswrapper[4821]: I0930 17:21:03.051732 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3943f2e8-623c-4f63-b427-a9190a41608f-scripts\") pod \"cinder-api-0\" (UID: \"3943f2e8-623c-4f63-b427-a9190a41608f\") " pod="openstack/cinder-api-0" Sep 30 17:21:03 crc kubenswrapper[4821]: I0930 17:21:03.052459 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3943f2e8-623c-4f63-b427-a9190a41608f-logs\") pod \"cinder-api-0\" (UID: \"3943f2e8-623c-4f63-b427-a9190a41608f\") " pod="openstack/cinder-api-0" Sep 30 17:21:03 crc kubenswrapper[4821]: I0930 17:21:03.052523 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3943f2e8-623c-4f63-b427-a9190a41608f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3943f2e8-623c-4f63-b427-a9190a41608f\") " pod="openstack/cinder-api-0" Sep 30 17:21:03 crc kubenswrapper[4821]: I0930 17:21:03.060958 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3943f2e8-623c-4f63-b427-a9190a41608f-scripts\") pod \"cinder-api-0\" (UID: \"3943f2e8-623c-4f63-b427-a9190a41608f\") " pod="openstack/cinder-api-0" Sep 30 17:21:03 crc kubenswrapper[4821]: I0930 17:21:03.068994 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3943f2e8-623c-4f63-b427-a9190a41608f-config-data\") pod \"cinder-api-0\" (UID: \"3943f2e8-623c-4f63-b427-a9190a41608f\") " pod="openstack/cinder-api-0" Sep 30 17:21:03 crc kubenswrapper[4821]: I0930 17:21:03.070922 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3943f2e8-623c-4f63-b427-a9190a41608f-config-data-custom\") pod \"cinder-api-0\" (UID: \"3943f2e8-623c-4f63-b427-a9190a41608f\") " pod="openstack/cinder-api-0" Sep 30 17:21:03 crc kubenswrapper[4821]: I0930 17:21:03.081301 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3943f2e8-623c-4f63-b427-a9190a41608f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3943f2e8-623c-4f63-b427-a9190a41608f\") " pod="openstack/cinder-api-0" Sep 30 17:21:03 crc kubenswrapper[4821]: I0930 17:21:03.088730 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q5gv\" (UniqueName: \"kubernetes.io/projected/3943f2e8-623c-4f63-b427-a9190a41608f-kube-api-access-4q5gv\") pod \"cinder-api-0\" (UID: \"3943f2e8-623c-4f63-b427-a9190a41608f\") " pod="openstack/cinder-api-0" Sep 30 17:21:03 crc kubenswrapper[4821]: I0930 17:21:03.092626 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3943f2e8-623c-4f63-b427-a9190a41608f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3943f2e8-623c-4f63-b427-a9190a41608f\") " pod="openstack/cinder-api-0" Sep 30 17:21:03 crc kubenswrapper[4821]: I0930 17:21:03.093106 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/3943f2e8-623c-4f63-b427-a9190a41608f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3943f2e8-623c-4f63-b427-a9190a41608f\") " pod="openstack/cinder-api-0" Sep 30 17:21:03 crc kubenswrapper[4821]: I0930 17:21:03.134509 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-9057-account-create-5jkzz"] Sep 30 17:21:03 crc kubenswrapper[4821]: I0930 17:21:03.136070 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9057-account-create-5jkzz" Sep 30 17:21:03 crc kubenswrapper[4821]: I0930 17:21:03.136342 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Sep 30 17:21:03 crc kubenswrapper[4821]: I0930 17:21:03.141500 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Sep 30 17:21:03 crc kubenswrapper[4821]: I0930 17:21:03.144514 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9057-account-create-5jkzz"] Sep 30 17:21:03 crc kubenswrapper[4821]: I0930 17:21:03.254207 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7vjj\" (UniqueName: \"kubernetes.io/projected/b7b7e625-a3dd-436c-9dba-8452958f101d-kube-api-access-n7vjj\") pod \"nova-api-9057-account-create-5jkzz\" (UID: \"b7b7e625-a3dd-436c-9dba-8452958f101d\") " pod="openstack/nova-api-9057-account-create-5jkzz" Sep 30 17:21:03 crc kubenswrapper[4821]: I0930 17:21:03.355392 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7vjj\" (UniqueName: \"kubernetes.io/projected/b7b7e625-a3dd-436c-9dba-8452958f101d-kube-api-access-n7vjj\") pod \"nova-api-9057-account-create-5jkzz\" (UID: \"b7b7e625-a3dd-436c-9dba-8452958f101d\") " pod="openstack/nova-api-9057-account-create-5jkzz" Sep 30 17:21:03 crc kubenswrapper[4821]: I0930 17:21:03.389204 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7vjj\" (UniqueName: \"kubernetes.io/projected/b7b7e625-a3dd-436c-9dba-8452958f101d-kube-api-access-n7vjj\") pod \"nova-api-9057-account-create-5jkzz\" (UID: \"b7b7e625-a3dd-436c-9dba-8452958f101d\") " pod="openstack/nova-api-9057-account-create-5jkzz" Sep 30 17:21:03 crc kubenswrapper[4821]: I0930 17:21:03.540948 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-9057-account-create-5jkzz" Sep 30 17:21:03 crc kubenswrapper[4821]: I0930 17:21:03.593601 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Sep 30 17:21:03 crc kubenswrapper[4821]: W0930 17:21:03.605699 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3943f2e8_623c_4f63_b427_a9190a41608f.slice/crio-f078f945b14132f443ce15a074b91b89440f723492922d11df17e2b2cadd79a4 WatchSource:0}: Error finding container f078f945b14132f443ce15a074b91b89440f723492922d11df17e2b2cadd79a4: Status 404 returned error can't find the container with id f078f945b14132f443ce15a074b91b89440f723492922d11df17e2b2cadd79a4 Sep 30 17:21:03 crc kubenswrapper[4821]: I0930 17:21:03.784297 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3943f2e8-623c-4f63-b427-a9190a41608f","Type":"ContainerStarted","Data":"f078f945b14132f443ce15a074b91b89440f723492922d11df17e2b2cadd79a4"} Sep 30 17:21:04 crc kubenswrapper[4821]: I0930 17:21:04.016625 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9057-account-create-5jkzz"] Sep 30 17:21:04 crc kubenswrapper[4821]: W0930 17:21:04.025215 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7b7e625_a3dd_436c_9dba_8452958f101d.slice/crio-49b65460c2aa97d8e3bac28dcc04e81c7a024a56a424ca987847686afc3b19d3 WatchSource:0}: Error finding container 49b65460c2aa97d8e3bac28dcc04e81c7a024a56a424ca987847686afc3b19d3: Status 404 returned error can't find the container with id 49b65460c2aa97d8e3bac28dcc04e81c7a024a56a424ca987847686afc3b19d3 Sep 30 17:21:04 crc kubenswrapper[4821]: I0930 17:21:04.107802 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:21:04 crc kubenswrapper[4821]: I0930 17:21:04.108515 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0767a72c-1e7c-44a0-901c-21807a447600" containerName="glance-httpd" containerID="cri-o://43450ab0975b7cc23578e8fba6e9566b3e95ad2ee96642ae6c54420a7d21d974" gracePeriod=30 Sep 30 17:21:04 crc kubenswrapper[4821]: I0930 17:21:04.108355 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0767a72c-1e7c-44a0-901c-21807a447600" containerName="glance-log" containerID="cri-o://9f2848a58dc3ddee4ee6d82cd5cbdd897e52e9c7bd7ed2b0eb58088af14898ae" gracePeriod=30 Sep 30 17:21:04 crc kubenswrapper[4821]: I0930 17:21:04.761572 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88a60aa8-fb9c-4813-99b2-7d01ec08aa05" path="/var/lib/kubelet/pods/88a60aa8-fb9c-4813-99b2-7d01ec08aa05/volumes" Sep 30 17:21:04 crc kubenswrapper[4821]: I0930 17:21:04.803561 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3943f2e8-623c-4f63-b427-a9190a41608f","Type":"ContainerStarted","Data":"ff1a5a5fc83bc583b15feb829baf9289ed1743cd91a37f85a3cb1c7027920a19"} Sep 30 17:21:04 crc kubenswrapper[4821]: I0930 17:21:04.807270 4821 generic.go:334] "Generic (PLEG): container finished" podID="0767a72c-1e7c-44a0-901c-21807a447600" containerID="9f2848a58dc3ddee4ee6d82cd5cbdd897e52e9c7bd7ed2b0eb58088af14898ae" exitCode=143 Sep 30 17:21:04 crc kubenswrapper[4821]: I0930 17:21:04.807342 4821 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0767a72c-1e7c-44a0-901c-21807a447600","Type":"ContainerDied","Data":"9f2848a58dc3ddee4ee6d82cd5cbdd897e52e9c7bd7ed2b0eb58088af14898ae"} Sep 30 17:21:04 crc kubenswrapper[4821]: I0930 17:21:04.809718 4821 generic.go:334] "Generic (PLEG): container finished" podID="b7b7e625-a3dd-436c-9dba-8452958f101d" containerID="9b0d8976e1096d6476ab0e99d8f702edd0b1789ca31b7f5fa08e54adc96c3165" exitCode=0 Sep 30 17:21:04 crc kubenswrapper[4821]: I0930 17:21:04.809847 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9057-account-create-5jkzz" event={"ID":"b7b7e625-a3dd-436c-9dba-8452958f101d","Type":"ContainerDied","Data":"9b0d8976e1096d6476ab0e99d8f702edd0b1789ca31b7f5fa08e54adc96c3165"} Sep 30 17:21:04 crc kubenswrapper[4821]: I0930 17:21:04.809883 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9057-account-create-5jkzz" event={"ID":"b7b7e625-a3dd-436c-9dba-8452958f101d","Type":"ContainerStarted","Data":"49b65460c2aa97d8e3bac28dcc04e81c7a024a56a424ca987847686afc3b19d3"} Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.700244 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.803681 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/72758fd1-eda8-4729-b57c-8b357ada068a-httpd-run\") pod \"72758fd1-eda8-4729-b57c-8b357ada068a\" (UID: \"72758fd1-eda8-4729-b57c-8b357ada068a\") " Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.803777 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2c4x\" (UniqueName: \"kubernetes.io/projected/72758fd1-eda8-4729-b57c-8b357ada068a-kube-api-access-t2c4x\") pod \"72758fd1-eda8-4729-b57c-8b357ada068a\" (UID: \"72758fd1-eda8-4729-b57c-8b357ada068a\") " Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.803830 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72758fd1-eda8-4729-b57c-8b357ada068a-combined-ca-bundle\") pod \"72758fd1-eda8-4729-b57c-8b357ada068a\" (UID: \"72758fd1-eda8-4729-b57c-8b357ada068a\") " Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.803887 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72758fd1-eda8-4729-b57c-8b357ada068a-public-tls-certs\") pod \"72758fd1-eda8-4729-b57c-8b357ada068a\" (UID: \"72758fd1-eda8-4729-b57c-8b357ada068a\") " Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.803943 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72758fd1-eda8-4729-b57c-8b357ada068a-scripts\") pod \"72758fd1-eda8-4729-b57c-8b357ada068a\" (UID: \"72758fd1-eda8-4729-b57c-8b357ada068a\") " Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.803990 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"72758fd1-eda8-4729-b57c-8b357ada068a\" (UID: \"72758fd1-eda8-4729-b57c-8b357ada068a\") " Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.804028 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/72758fd1-eda8-4729-b57c-8b357ada068a-logs\") pod \"72758fd1-eda8-4729-b57c-8b357ada068a\" (UID: \"72758fd1-eda8-4729-b57c-8b357ada068a\") " Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.804073 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72758fd1-eda8-4729-b57c-8b357ada068a-config-data\") pod \"72758fd1-eda8-4729-b57c-8b357ada068a\" (UID: \"72758fd1-eda8-4729-b57c-8b357ada068a\") " Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.805405 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72758fd1-eda8-4729-b57c-8b357ada068a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "72758fd1-eda8-4729-b57c-8b357ada068a" (UID: "72758fd1-eda8-4729-b57c-8b357ada068a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.806425 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72758fd1-eda8-4729-b57c-8b357ada068a-logs" (OuterVolumeSpecName: "logs") pod "72758fd1-eda8-4729-b57c-8b357ada068a" (UID: "72758fd1-eda8-4729-b57c-8b357ada068a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.817484 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72758fd1-eda8-4729-b57c-8b357ada068a-kube-api-access-t2c4x" (OuterVolumeSpecName: "kube-api-access-t2c4x") pod "72758fd1-eda8-4729-b57c-8b357ada068a" (UID: "72758fd1-eda8-4729-b57c-8b357ada068a"). InnerVolumeSpecName "kube-api-access-t2c4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.819513 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "72758fd1-eda8-4729-b57c-8b357ada068a" (UID: "72758fd1-eda8-4729-b57c-8b357ada068a"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.825275 4821 generic.go:334] "Generic (PLEG): container finished" podID="72758fd1-eda8-4729-b57c-8b357ada068a" containerID="343613df0dd5a4f886e14ee10b73f068a13da7ac4077cbf50a22140682bdab81" exitCode=0 Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.825338 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.825351 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"72758fd1-eda8-4729-b57c-8b357ada068a","Type":"ContainerDied","Data":"343613df0dd5a4f886e14ee10b73f068a13da7ac4077cbf50a22140682bdab81"} Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.825582 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"72758fd1-eda8-4729-b57c-8b357ada068a","Type":"ContainerDied","Data":"6e68dd3c1e24676ff4f421ee3659a4fd13f3e3d813f2d2d6fd8b1338ba6b4488"} Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.825614 4821 scope.go:117] "RemoveContainer" containerID="343613df0dd5a4f886e14ee10b73f068a13da7ac4077cbf50a22140682bdab81" Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.829603 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3943f2e8-623c-4f63-b427-a9190a41608f","Type":"ContainerStarted","Data":"894f8ff5c46a39737eef76b1ae33d939f5a79b6eac4fe1400f78c8b25c1b7eed"} Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.829642 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.864826 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72758fd1-eda8-4729-b57c-8b357ada068a-scripts" (OuterVolumeSpecName: "scripts") pod "72758fd1-eda8-4729-b57c-8b357ada068a" (UID: "72758fd1-eda8-4729-b57c-8b357ada068a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.891258 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72758fd1-eda8-4729-b57c-8b357ada068a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72758fd1-eda8-4729-b57c-8b357ada068a" (UID: "72758fd1-eda8-4729-b57c-8b357ada068a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.906998 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72758fd1-eda8-4729-b57c-8b357ada068a-logs\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.907024 4821 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/72758fd1-eda8-4729-b57c-8b357ada068a-httpd-run\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.907034 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2c4x\" (UniqueName: \"kubernetes.io/projected/72758fd1-eda8-4729-b57c-8b357ada068a-kube-api-access-t2c4x\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.907043 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72758fd1-eda8-4729-b57c-8b357ada068a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.907051 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72758fd1-eda8-4729-b57c-8b357ada068a-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.907076 4821 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.914524 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72758fd1-eda8-4729-b57c-8b357ada068a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "72758fd1-eda8-4729-b57c-8b357ada068a" (UID: "72758fd1-eda8-4729-b57c-8b357ada068a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.916255 4821 scope.go:117] "RemoveContainer" containerID="cf41986fda4639b28a53047b338d6d6684994da1901771aea3061284b8798661" Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.919225 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72758fd1-eda8-4729-b57c-8b357ada068a-config-data" (OuterVolumeSpecName: "config-data") pod "72758fd1-eda8-4729-b57c-8b357ada068a" (UID: "72758fd1-eda8-4729-b57c-8b357ada068a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.937109 4821 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.957507 4821 scope.go:117] "RemoveContainer" containerID="343613df0dd5a4f886e14ee10b73f068a13da7ac4077cbf50a22140682bdab81" Sep 30 17:21:05 crc kubenswrapper[4821]: E0930 17:21:05.958395 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"343613df0dd5a4f886e14ee10b73f068a13da7ac4077cbf50a22140682bdab81\": container with ID starting with 343613df0dd5a4f886e14ee10b73f068a13da7ac4077cbf50a22140682bdab81 not found: ID does not exist" containerID="343613df0dd5a4f886e14ee10b73f068a13da7ac4077cbf50a22140682bdab81" Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.958434 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"343613df0dd5a4f886e14ee10b73f068a13da7ac4077cbf50a22140682bdab81"} err="failed to get container status \"343613df0dd5a4f886e14ee10b73f068a13da7ac4077cbf50a22140682bdab81\": rpc error: code = NotFound desc = could not find container \"343613df0dd5a4f886e14ee10b73f068a13da7ac4077cbf50a22140682bdab81\": container with ID starting with 343613df0dd5a4f886e14ee10b73f068a13da7ac4077cbf50a22140682bdab81 not found: ID does not exist" Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.958457 4821 scope.go:117] "RemoveContainer" containerID="cf41986fda4639b28a53047b338d6d6684994da1901771aea3061284b8798661" Sep 30 17:21:05 crc kubenswrapper[4821]: E0930 17:21:05.958775 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf41986fda4639b28a53047b338d6d6684994da1901771aea3061284b8798661\": container with ID starting with cf41986fda4639b28a53047b338d6d6684994da1901771aea3061284b8798661 not found: ID does not exist" containerID="cf41986fda4639b28a53047b338d6d6684994da1901771aea3061284b8798661" Sep 30 17:21:05 crc kubenswrapper[4821]: I0930 17:21:05.958798 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf41986fda4639b28a53047b338d6d6684994da1901771aea3061284b8798661"} err="failed to get container status \"cf41986fda4639b28a53047b338d6d6684994da1901771aea3061284b8798661\": rpc error: code = NotFound desc = could not find container \"cf41986fda4639b28a53047b338d6d6684994da1901771aea3061284b8798661\": container with ID starting with cf41986fda4639b28a53047b338d6d6684994da1901771aea3061284b8798661 not found: ID does not exist" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.010224 4821 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72758fd1-eda8-4729-b57c-8b357ada068a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.010259 4821 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.010272 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72758fd1-eda8-4729-b57c-8b357ada068a-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:06 crc 
kubenswrapper[4821]: I0930 17:21:06.162804 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9057-account-create-5jkzz" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.176834 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.176817993 podStartE2EDuration="4.176817993s" podCreationTimestamp="2025-09-30 17:21:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:21:05.862976085 +0000 UTC m=+1061.768022029" watchObservedRunningTime="2025-09-30 17:21:06.176817993 +0000 UTC m=+1062.081863937" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.177614 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.183304 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.212250 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7vjj\" (UniqueName: \"kubernetes.io/projected/b7b7e625-a3dd-436c-9dba-8452958f101d-kube-api-access-n7vjj\") pod \"b7b7e625-a3dd-436c-9dba-8452958f101d\" (UID: \"b7b7e625-a3dd-436c-9dba-8452958f101d\") " Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.224469 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7b7e625-a3dd-436c-9dba-8452958f101d-kube-api-access-n7vjj" (OuterVolumeSpecName: "kube-api-access-n7vjj") pod "b7b7e625-a3dd-436c-9dba-8452958f101d" (UID: "b7b7e625-a3dd-436c-9dba-8452958f101d"). InnerVolumeSpecName "kube-api-access-n7vjj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.227549 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:21:06 crc kubenswrapper[4821]: E0930 17:21:06.227888 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72758fd1-eda8-4729-b57c-8b357ada068a" containerName="glance-log" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.227904 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="72758fd1-eda8-4729-b57c-8b357ada068a" containerName="glance-log" Sep 30 17:21:06 crc kubenswrapper[4821]: E0930 17:21:06.227913 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7b7e625-a3dd-436c-9dba-8452958f101d" containerName="mariadb-account-create" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.227920 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7b7e625-a3dd-436c-9dba-8452958f101d" containerName="mariadb-account-create" Sep 30 17:21:06 crc kubenswrapper[4821]: E0930 17:21:06.227942 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72758fd1-eda8-4729-b57c-8b357ada068a" containerName="glance-httpd" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.227948 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="72758fd1-eda8-4729-b57c-8b357ada068a" containerName="glance-httpd" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.228140 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7b7e625-a3dd-436c-9dba-8452958f101d" containerName="mariadb-account-create" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.228150 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="72758fd1-eda8-4729-b57c-8b357ada068a" containerName="glance-httpd" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.228167 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="72758fd1-eda8-4729-b57c-8b357ada068a" containerName="glance-log" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.228969 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.232544 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.232694 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.251415 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.313785 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/96ca9317-b33b-423b-9d59-3c9e9719c941-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"96ca9317-b33b-423b-9d59-3c9e9719c941\") " pod="openstack/glance-default-external-api-0" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.313832 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96ca9317-b33b-423b-9d59-3c9e9719c941-logs\") pod \"glance-default-external-api-0\" (UID: \"96ca9317-b33b-423b-9d59-3c9e9719c941\") " pod="openstack/glance-default-external-api-0" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.313864 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96ca9317-b33b-423b-9d59-3c9e9719c941-config-data\") pod \"glance-default-external-api-0\" (UID: \"96ca9317-b33b-423b-9d59-3c9e9719c941\") " pod="openstack/glance-default-external-api-0" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.313889 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7z2p\" (UniqueName: \"kubernetes.io/projected/96ca9317-b33b-423b-9d59-3c9e9719c941-kube-api-access-h7z2p\") pod \"glance-default-external-api-0\" (UID: \"96ca9317-b33b-423b-9d59-3c9e9719c941\") " pod="openstack/glance-default-external-api-0" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.313916 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"96ca9317-b33b-423b-9d59-3c9e9719c941\") " pod="openstack/glance-default-external-api-0" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.313949 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96ca9317-b33b-423b-9d59-3c9e9719c941-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"96ca9317-b33b-423b-9d59-3c9e9719c941\") " pod="openstack/glance-default-external-api-0" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.313991 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96ca9317-b33b-423b-9d59-3c9e9719c941-scripts\") pod \"glance-default-external-api-0\" (UID: \"96ca9317-b33b-423b-9d59-3c9e9719c941\") " pod="openstack/glance-default-external-api-0" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.314019 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/96ca9317-b33b-423b-9d59-3c9e9719c941-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"96ca9317-b33b-423b-9d59-3c9e9719c941\") " pod="openstack/glance-default-external-api-0" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.314105 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7vjj\" (UniqueName: \"kubernetes.io/projected/b7b7e625-a3dd-436c-9dba-8452958f101d-kube-api-access-n7vjj\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.415811 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/96ca9317-b33b-423b-9d59-3c9e9719c941-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"96ca9317-b33b-423b-9d59-3c9e9719c941\") " pod="openstack/glance-default-external-api-0" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.416172 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/96ca9317-b33b-423b-9d59-3c9e9719c941-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"96ca9317-b33b-423b-9d59-3c9e9719c941\") " pod="openstack/glance-default-external-api-0" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.416258 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96ca9317-b33b-423b-9d59-3c9e9719c941-logs\") pod \"glance-default-external-api-0\" (UID: \"96ca9317-b33b-423b-9d59-3c9e9719c941\") " pod="openstack/glance-default-external-api-0" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.416351 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96ca9317-b33b-423b-9d59-3c9e9719c941-config-data\") pod \"glance-default-external-api-0\" (UID: \"96ca9317-b33b-423b-9d59-3c9e9719c941\") " pod="openstack/glance-default-external-api-0" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.416438 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7z2p\" (UniqueName: \"kubernetes.io/projected/96ca9317-b33b-423b-9d59-3c9e9719c941-kube-api-access-h7z2p\") pod \"glance-default-external-api-0\" (UID: \"96ca9317-b33b-423b-9d59-3c9e9719c941\") " pod="openstack/glance-default-external-api-0" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.416541 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"96ca9317-b33b-423b-9d59-3c9e9719c941\") " pod="openstack/glance-default-external-api-0" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.416868 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96ca9317-b33b-423b-9d59-3c9e9719c941-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"96ca9317-b33b-423b-9d59-3c9e9719c941\") " pod="openstack/glance-default-external-api-0" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.417298 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96ca9317-b33b-423b-9d59-3c9e9719c941-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"96ca9317-b33b-423b-9d59-3c9e9719c941\") " pod="openstack/glance-default-external-api-0" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.416819 4821 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"96ca9317-b33b-423b-9d59-3c9e9719c941\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.416769 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96ca9317-b33b-423b-9d59-3c9e9719c941-logs\") pod \"glance-default-external-api-0\" (UID: \"96ca9317-b33b-423b-9d59-3c9e9719c941\") " pod="openstack/glance-default-external-api-0" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.416785 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/96ca9317-b33b-423b-9d59-3c9e9719c941-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"96ca9317-b33b-423b-9d59-3c9e9719c941\") " pod="openstack/glance-default-external-api-0" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.419879 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/96ca9317-b33b-423b-9d59-3c9e9719c941-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"96ca9317-b33b-423b-9d59-3c9e9719c941\") " pod="openstack/glance-default-external-api-0" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.424407 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96ca9317-b33b-423b-9d59-3c9e9719c941-config-data\") pod \"glance-default-external-api-0\" (UID: \"96ca9317-b33b-423b-9d59-3c9e9719c941\") " pod="openstack/glance-default-external-api-0" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.427675 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96ca9317-b33b-423b-9d59-3c9e9719c941-scripts\") pod \"glance-default-external-api-0\" (UID: \"96ca9317-b33b-423b-9d59-3c9e9719c941\") " pod="openstack/glance-default-external-api-0" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.427914 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96ca9317-b33b-423b-9d59-3c9e9719c941-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"96ca9317-b33b-423b-9d59-3c9e9719c941\") " pod="openstack/glance-default-external-api-0" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.441858 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7z2p\" (UniqueName: \"kubernetes.io/projected/96ca9317-b33b-423b-9d59-3c9e9719c941-kube-api-access-h7z2p\") pod \"glance-default-external-api-0\" (UID: \"96ca9317-b33b-423b-9d59-3c9e9719c941\") " pod="openstack/glance-default-external-api-0" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.452148 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"96ca9317-b33b-423b-9d59-3c9e9719c941\") " pod="openstack/glance-default-external-api-0" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 
17:21:06.570717 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.721806 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72758fd1-eda8-4729-b57c-8b357ada068a" path="/var/lib/kubelet/pods/72758fd1-eda8-4729-b57c-8b357ada068a/volumes" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.838839 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9057-account-create-5jkzz" Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.838842 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9057-account-create-5jkzz" event={"ID":"b7b7e625-a3dd-436c-9dba-8452958f101d","Type":"ContainerDied","Data":"49b65460c2aa97d8e3bac28dcc04e81c7a024a56a424ca987847686afc3b19d3"} Sep 30 17:21:06 crc kubenswrapper[4821]: I0930 17:21:06.838885 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49b65460c2aa97d8e3bac28dcc04e81c7a024a56a424ca987847686afc3b19d3" Sep 30 17:21:07 crc kubenswrapper[4821]: I0930 17:21:07.113067 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:21:07 crc kubenswrapper[4821]: W0930 17:21:07.120335 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96ca9317_b33b_423b_9d59_3c9e9719c941.slice/crio-d0ae5d9a47d11acbe76066892a087c061fadc2a0e7c2dccc602d68b03fee45f5 WatchSource:0}: Error finding container d0ae5d9a47d11acbe76066892a087c061fadc2a0e7c2dccc602d68b03fee45f5: Status 404 returned error can't find the container with id d0ae5d9a47d11acbe76066892a087c061fadc2a0e7c2dccc602d68b03fee45f5 Sep 30 17:21:07 crc kubenswrapper[4821]: I0930 17:21:07.848190 4821 generic.go:334] "Generic (PLEG): container finished" podID="0767a72c-1e7c-44a0-901c-21807a447600" containerID="43450ab0975b7cc23578e8fba6e9566b3e95ad2ee96642ae6c54420a7d21d974" exitCode=0 Sep 30 17:21:07 crc kubenswrapper[4821]: I0930 17:21:07.848274 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0767a72c-1e7c-44a0-901c-21807a447600","Type":"ContainerDied","Data":"43450ab0975b7cc23578e8fba6e9566b3e95ad2ee96642ae6c54420a7d21d974"} Sep 30 17:21:07 crc kubenswrapper[4821]: I0930 17:21:07.851009 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"96ca9317-b33b-423b-9d59-3c9e9719c941","Type":"ContainerStarted","Data":"83b38be11569cec6f91959e9cce6c4fcd2ae9abbd999d5828c6dc6423ba6a9dc"} Sep 30 17:21:07 crc kubenswrapper[4821]: I0930 17:21:07.851060 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"96ca9317-b33b-423b-9d59-3c9e9719c941","Type":"ContainerStarted","Data":"d0ae5d9a47d11acbe76066892a087c061fadc2a0e7c2dccc602d68b03fee45f5"} Sep 30 17:21:07 crc kubenswrapper[4821]: I0930 17:21:07.882387 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 17:21:07 crc kubenswrapper[4821]: I0930 17:21:07.946045 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"0767a72c-1e7c-44a0-901c-21807a447600\" (UID: \"0767a72c-1e7c-44a0-901c-21807a447600\") " Sep 30 17:21:07 crc kubenswrapper[4821]: I0930 17:21:07.946147 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0767a72c-1e7c-44a0-901c-21807a447600-internal-tls-certs\") pod \"0767a72c-1e7c-44a0-901c-21807a447600\" (UID: \"0767a72c-1e7c-44a0-901c-21807a447600\") " Sep 30 17:21:07 crc kubenswrapper[4821]: I0930 17:21:07.946225 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2hr2\" (UniqueName: \"kubernetes.io/projected/0767a72c-1e7c-44a0-901c-21807a447600-kube-api-access-x2hr2\") pod \"0767a72c-1e7c-44a0-901c-21807a447600\" (UID: \"0767a72c-1e7c-44a0-901c-21807a447600\") " Sep 30 17:21:07 crc kubenswrapper[4821]: I0930 17:21:07.946301 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0767a72c-1e7c-44a0-901c-21807a447600-httpd-run\") pod \"0767a72c-1e7c-44a0-901c-21807a447600\" (UID: \"0767a72c-1e7c-44a0-901c-21807a447600\") " Sep 30 17:21:07 crc kubenswrapper[4821]: I0930 17:21:07.946345 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0767a72c-1e7c-44a0-901c-21807a447600-combined-ca-bundle\") pod \"0767a72c-1e7c-44a0-901c-21807a447600\" (UID: \"0767a72c-1e7c-44a0-901c-21807a447600\") " Sep 30 17:21:07 crc kubenswrapper[4821]: I0930 17:21:07.946367 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0767a72c-1e7c-44a0-901c-21807a447600-scripts\") pod \"0767a72c-1e7c-44a0-901c-21807a447600\" (UID: \"0767a72c-1e7c-44a0-901c-21807a447600\") " Sep 30 17:21:07 crc kubenswrapper[4821]: I0930 17:21:07.946403 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0767a72c-1e7c-44a0-901c-21807a447600-logs\") pod \"0767a72c-1e7c-44a0-901c-21807a447600\" (UID: \"0767a72c-1e7c-44a0-901c-21807a447600\") " Sep 30 17:21:07 crc kubenswrapper[4821]: I0930 17:21:07.946434 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0767a72c-1e7c-44a0-901c-21807a447600-config-data\") pod \"0767a72c-1e7c-44a0-901c-21807a447600\" (UID: \"0767a72c-1e7c-44a0-901c-21807a447600\") " Sep 30 17:21:07 crc kubenswrapper[4821]: I0930 17:21:07.946764 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0767a72c-1e7c-44a0-901c-21807a447600-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0767a72c-1e7c-44a0-901c-21807a447600" (UID: "0767a72c-1e7c-44a0-901c-21807a447600"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:21:07 crc kubenswrapper[4821]: I0930 17:21:07.951290 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0767a72c-1e7c-44a0-901c-21807a447600-logs" (OuterVolumeSpecName: "logs") pod "0767a72c-1e7c-44a0-901c-21807a447600" (UID: "0767a72c-1e7c-44a0-901c-21807a447600"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:21:07 crc kubenswrapper[4821]: I0930 17:21:07.951625 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "0767a72c-1e7c-44a0-901c-21807a447600" (UID: "0767a72c-1e7c-44a0-901c-21807a447600"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Sep 30 17:21:07 crc kubenswrapper[4821]: I0930 17:21:07.955707 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0767a72c-1e7c-44a0-901c-21807a447600-kube-api-access-x2hr2" (OuterVolumeSpecName: "kube-api-access-x2hr2") pod "0767a72c-1e7c-44a0-901c-21807a447600" (UID: "0767a72c-1e7c-44a0-901c-21807a447600"). InnerVolumeSpecName "kube-api-access-x2hr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:21:07 crc kubenswrapper[4821]: I0930 17:21:07.957824 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0767a72c-1e7c-44a0-901c-21807a447600-scripts" (OuterVolumeSpecName: "scripts") pod "0767a72c-1e7c-44a0-901c-21807a447600" (UID: "0767a72c-1e7c-44a0-901c-21807a447600"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:21:07 crc kubenswrapper[4821]: I0930 17:21:07.984422 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0767a72c-1e7c-44a0-901c-21807a447600-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0767a72c-1e7c-44a0-901c-21807a447600" (UID: "0767a72c-1e7c-44a0-901c-21807a447600"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:21:08 crc kubenswrapper[4821]: I0930 17:21:08.002609 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0767a72c-1e7c-44a0-901c-21807a447600-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0767a72c-1e7c-44a0-901c-21807a447600" (UID: "0767a72c-1e7c-44a0-901c-21807a447600"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:21:08 crc kubenswrapper[4821]: I0930 17:21:08.012334 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0767a72c-1e7c-44a0-901c-21807a447600-config-data" (OuterVolumeSpecName: "config-data") pod "0767a72c-1e7c-44a0-901c-21807a447600" (UID: "0767a72c-1e7c-44a0-901c-21807a447600"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:21:08 crc kubenswrapper[4821]: I0930 17:21:08.048076 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2hr2\" (UniqueName: \"kubernetes.io/projected/0767a72c-1e7c-44a0-901c-21807a447600-kube-api-access-x2hr2\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:08 crc kubenswrapper[4821]: I0930 17:21:08.048147 4821 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0767a72c-1e7c-44a0-901c-21807a447600-httpd-run\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:08 crc kubenswrapper[4821]: I0930 17:21:08.048159 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0767a72c-1e7c-44a0-901c-21807a447600-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:08 crc kubenswrapper[4821]: I0930 17:21:08.048168 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0767a72c-1e7c-44a0-901c-21807a447600-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:08 crc kubenswrapper[4821]: I0930 17:21:08.048177 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0767a72c-1e7c-44a0-901c-21807a447600-logs\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:08 crc kubenswrapper[4821]: I0930 17:21:08.048186 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0767a72c-1e7c-44a0-901c-21807a447600-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:08 crc kubenswrapper[4821]: I0930 17:21:08.048216 4821 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Sep 30 17:21:08 crc kubenswrapper[4821]: I0930 17:21:08.048228 4821 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0767a72c-1e7c-44a0-901c-21807a447600-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:08 crc kubenswrapper[4821]: I0930 17:21:08.065676 4821 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Sep 30 17:21:08 crc kubenswrapper[4821]: I0930 17:21:08.150138 4821 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:08 crc kubenswrapper[4821]: I0930 17:21:08.859453 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0767a72c-1e7c-44a0-901c-21807a447600","Type":"ContainerDied","Data":"ba7b80c17bfa38a6f1c9e306c5fcc0e64764060ed8e6c795505b0cf8e8ec967c"} Sep 30 17:21:08 crc kubenswrapper[4821]: I0930 17:21:08.859779 4821 scope.go:117] "RemoveContainer" containerID="43450ab0975b7cc23578e8fba6e9566b3e95ad2ee96642ae6c54420a7d21d974" Sep 30 17:21:08 crc kubenswrapper[4821]: I0930 17:21:08.859498 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 17:21:08 crc kubenswrapper[4821]: I0930 17:21:08.864001 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"96ca9317-b33b-423b-9d59-3c9e9719c941","Type":"ContainerStarted","Data":"25292d53630f3c171e97fb21f2b7165cf39fd1f7fc26d1cf8240ff2b48ca9667"} Sep 30 17:21:08 crc kubenswrapper[4821]: I0930 17:21:08.884438 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:21:08 crc kubenswrapper[4821]: I0930 17:21:08.896944 4821 scope.go:117] "RemoveContainer" containerID="9f2848a58dc3ddee4ee6d82cd5cbdd897e52e9c7bd7ed2b0eb58088af14898ae" Sep 30 17:21:08 crc kubenswrapper[4821]: I0930 17:21:08.899922 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:21:08 crc kubenswrapper[4821]: I0930 17:21:08.913344 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:21:08 crc kubenswrapper[4821]: E0930 17:21:08.913733 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0767a72c-1e7c-44a0-901c-21807a447600" containerName="glance-log" Sep 30 17:21:08 crc kubenswrapper[4821]: I0930 17:21:08.913756 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="0767a72c-1e7c-44a0-901c-21807a447600" containerName="glance-log" Sep 30 17:21:08 crc kubenswrapper[4821]: E0930 17:21:08.913773 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0767a72c-1e7c-44a0-901c-21807a447600" containerName="glance-httpd" Sep 30 17:21:08 crc kubenswrapper[4821]: I0930 17:21:08.913780 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="0767a72c-1e7c-44a0-901c-21807a447600" containerName="glance-httpd" Sep 30 17:21:08 crc kubenswrapper[4821]: I0930 17:21:08.913938 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="0767a72c-1e7c-44a0-901c-21807a447600" containerName="glance-httpd" Sep 30 17:21:08 crc kubenswrapper[4821]: I0930 17:21:08.913971 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="0767a72c-1e7c-44a0-901c-21807a447600" containerName="glance-log" Sep 30 17:21:08 crc kubenswrapper[4821]: I0930 17:21:08.915535 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 17:21:08 crc kubenswrapper[4821]: I0930 17:21:08.918677 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Sep 30 17:21:08 crc kubenswrapper[4821]: I0930 17:21:08.918870 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Sep 30 17:21:08 crc kubenswrapper[4821]: I0930 17:21:08.929919 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=2.929897192 podStartE2EDuration="2.929897192s" podCreationTimestamp="2025-09-30 17:21:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:21:08.917575695 +0000 UTC m=+1064.822621639" watchObservedRunningTime="2025-09-30 17:21:08.929897192 +0000 UTC m=+1064.834943126" Sep 30 17:21:08 crc kubenswrapper[4821]: I0930 17:21:08.942670 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:21:09 crc kubenswrapper[4821]: I0930 17:21:09.065406 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54601c55-069d-4baa-891a-e359cf501642-logs\") pod \"glance-default-internal-api-0\" (UID: \"54601c55-069d-4baa-891a-e359cf501642\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:21:09 crc kubenswrapper[4821]: I0930 17:21:09.065462 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54601c55-069d-4baa-891a-e359cf501642-scripts\") pod \"glance-default-internal-api-0\" (UID: \"54601c55-069d-4baa-891a-e359cf501642\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:21:09 crc kubenswrapper[4821]: I0930 17:21:09.065498 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54601c55-069d-4baa-891a-e359cf501642-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"54601c55-069d-4baa-891a-e359cf501642\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:21:09 crc kubenswrapper[4821]: I0930 17:21:09.065521 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"54601c55-069d-4baa-891a-e359cf501642\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:21:09 crc kubenswrapper[4821]: I0930 17:21:09.065541 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54601c55-069d-4baa-891a-e359cf501642-config-data\") pod \"glance-default-internal-api-0\" (UID: \"54601c55-069d-4baa-891a-e359cf501642\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:21:09 crc kubenswrapper[4821]: I0930 17:21:09.065593 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/54601c55-069d-4baa-891a-e359cf501642-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"54601c55-069d-4baa-891a-e359cf501642\") " pod="openstack/glance-default-internal-api-0" Sep 
30 17:21:09 crc kubenswrapper[4821]: I0930 17:21:09.065614 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvqpv\" (UniqueName: \"kubernetes.io/projected/54601c55-069d-4baa-891a-e359cf501642-kube-api-access-mvqpv\") pod \"glance-default-internal-api-0\" (UID: \"54601c55-069d-4baa-891a-e359cf501642\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:21:09 crc kubenswrapper[4821]: I0930 17:21:09.065655 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54601c55-069d-4baa-891a-e359cf501642-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"54601c55-069d-4baa-891a-e359cf501642\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:21:09 crc kubenswrapper[4821]: I0930 17:21:09.108977 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5b974b45dd-mbzvm" podUID="abd2779c-c7a7-4d42-8e83-7cbec573d595" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Sep 30 17:21:09 crc kubenswrapper[4821]: I0930 17:21:09.167428 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/54601c55-069d-4baa-891a-e359cf501642-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"54601c55-069d-4baa-891a-e359cf501642\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:21:09 crc kubenswrapper[4821]: I0930 17:21:09.167671 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvqpv\" (UniqueName: \"kubernetes.io/projected/54601c55-069d-4baa-891a-e359cf501642-kube-api-access-mvqpv\") pod \"glance-default-internal-api-0\" (UID: \"54601c55-069d-4baa-891a-e359cf501642\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:21:09 crc kubenswrapper[4821]: I0930 17:21:09.168024 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54601c55-069d-4baa-891a-e359cf501642-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"54601c55-069d-4baa-891a-e359cf501642\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:21:09 crc kubenswrapper[4821]: I0930 17:21:09.168171 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54601c55-069d-4baa-891a-e359cf501642-logs\") pod \"glance-default-internal-api-0\" (UID: \"54601c55-069d-4baa-891a-e359cf501642\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:21:09 crc kubenswrapper[4821]: I0930 17:21:09.168268 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54601c55-069d-4baa-891a-e359cf501642-scripts\") pod \"glance-default-internal-api-0\" (UID: \"54601c55-069d-4baa-891a-e359cf501642\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:21:09 crc kubenswrapper[4821]: I0930 17:21:09.168353 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54601c55-069d-4baa-891a-e359cf501642-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"54601c55-069d-4baa-891a-e359cf501642\") " pod="openstack/glance-default-internal-api-0" Sep 
30 17:21:09 crc kubenswrapper[4821]: I0930 17:21:09.168448 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"54601c55-069d-4baa-891a-e359cf501642\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:21:09 crc kubenswrapper[4821]: I0930 17:21:09.168553 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54601c55-069d-4baa-891a-e359cf501642-config-data\") pod \"glance-default-internal-api-0\" (UID: \"54601c55-069d-4baa-891a-e359cf501642\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:21:09 crc kubenswrapper[4821]: I0930 17:21:09.168568 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54601c55-069d-4baa-891a-e359cf501642-logs\") pod \"glance-default-internal-api-0\" (UID: \"54601c55-069d-4baa-891a-e359cf501642\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:21:09 crc kubenswrapper[4821]: I0930 17:21:09.168217 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/54601c55-069d-4baa-891a-e359cf501642-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"54601c55-069d-4baa-891a-e359cf501642\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:21:09 crc kubenswrapper[4821]: I0930 17:21:09.169125 4821 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"54601c55-069d-4baa-891a-e359cf501642\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Sep 30 17:21:09 crc kubenswrapper[4821]: I0930 17:21:09.173389 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54601c55-069d-4baa-891a-e359cf501642-config-data\") pod \"glance-default-internal-api-0\" (UID: \"54601c55-069d-4baa-891a-e359cf501642\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:21:09 crc kubenswrapper[4821]: I0930 17:21:09.174835 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54601c55-069d-4baa-891a-e359cf501642-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"54601c55-069d-4baa-891a-e359cf501642\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:21:09 crc kubenswrapper[4821]: I0930 17:21:09.174872 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54601c55-069d-4baa-891a-e359cf501642-scripts\") pod \"glance-default-internal-api-0\" (UID: \"54601c55-069d-4baa-891a-e359cf501642\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:21:09 crc kubenswrapper[4821]: I0930 17:21:09.191158 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvqpv\" (UniqueName: \"kubernetes.io/projected/54601c55-069d-4baa-891a-e359cf501642-kube-api-access-mvqpv\") pod \"glance-default-internal-api-0\" (UID: \"54601c55-069d-4baa-891a-e359cf501642\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:21:09 crc kubenswrapper[4821]: I0930 17:21:09.194239 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54601c55-069d-4baa-891a-e359cf501642-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"54601c55-069d-4baa-891a-e359cf501642\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:21:09 crc kubenswrapper[4821]: I0930 17:21:09.208496 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"54601c55-069d-4baa-891a-e359cf501642\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:21:09 crc kubenswrapper[4821]: I0930 17:21:09.234669 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 17:21:09 crc kubenswrapper[4821]: I0930 17:21:09.568687 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:21:09 crc kubenswrapper[4821]: I0930 17:21:09.873013 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"54601c55-069d-4baa-891a-e359cf501642","Type":"ContainerStarted","Data":"dc7983a2afe9fe56fb586581386a062010a6cdc133eaa6b2ab3fd94939880e23"} Sep 30 17:21:10 crc kubenswrapper[4821]: I0930 17:21:10.718742 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0767a72c-1e7c-44a0-901c-21807a447600" path="/var/lib/kubelet/pods/0767a72c-1e7c-44a0-901c-21807a447600/volumes" Sep 30 17:21:10 crc kubenswrapper[4821]: I0930 17:21:10.883965 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"54601c55-069d-4baa-891a-e359cf501642","Type":"ContainerStarted","Data":"667ba8fe7eb05da8b14f2478ff46bafd2f18ac0db9e17e4a9b6ea882b77846c6"} Sep 30 17:21:10 crc kubenswrapper[4821]: I0930 17:21:10.884338 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"54601c55-069d-4baa-891a-e359cf501642","Type":"ContainerStarted","Data":"c0e78835a73076fd9807e85cadf697be955695262c0b859958059c436691e5d5"} Sep 30 17:21:10 crc kubenswrapper[4821]: I0930 17:21:10.904896 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.90487837 podStartE2EDuration="2.90487837s" podCreationTimestamp="2025-09-30 17:21:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:21:10.899407724 +0000 UTC m=+1066.804453678" watchObservedRunningTime="2025-09-30 17:21:10.90487837 +0000 UTC m=+1066.809924314" Sep 30 17:21:13 crc kubenswrapper[4821]: I0930 17:21:13.244838 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-0881-account-create-s2482"] Sep 30 17:21:13 crc kubenswrapper[4821]: I0930 17:21:13.246152 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-0881-account-create-s2482" Sep 30 17:21:13 crc kubenswrapper[4821]: I0930 17:21:13.254334 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0881-account-create-s2482"] Sep 30 17:21:13 crc kubenswrapper[4821]: I0930 17:21:13.254899 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Sep 30 17:21:13 crc kubenswrapper[4821]: I0930 17:21:13.336919 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7xjv\" (UniqueName: \"kubernetes.io/projected/a2911be1-4bd5-4cbc-82f9-c08836c4d50a-kube-api-access-v7xjv\") pod \"nova-cell0-0881-account-create-s2482\" (UID: \"a2911be1-4bd5-4cbc-82f9-c08836c4d50a\") " pod="openstack/nova-cell0-0881-account-create-s2482" Sep 30 17:21:13 crc kubenswrapper[4821]: I0930 17:21:13.432860 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-27c1-account-create-z6qzm"] Sep 30 17:21:13 crc kubenswrapper[4821]: I0930 17:21:13.433879 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-27c1-account-create-z6qzm" Sep 30 17:21:13 crc kubenswrapper[4821]: I0930 17:21:13.435986 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Sep 30 17:21:13 crc kubenswrapper[4821]: I0930 17:21:13.438124 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7xjv\" (UniqueName: \"kubernetes.io/projected/a2911be1-4bd5-4cbc-82f9-c08836c4d50a-kube-api-access-v7xjv\") pod \"nova-cell0-0881-account-create-s2482\" (UID: \"a2911be1-4bd5-4cbc-82f9-c08836c4d50a\") " pod="openstack/nova-cell0-0881-account-create-s2482" Sep 30 17:21:13 crc kubenswrapper[4821]: I0930 17:21:13.441203 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-27c1-account-create-z6qzm"] Sep 30 17:21:13 crc kubenswrapper[4821]: I0930 17:21:13.462491 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7xjv\" (UniqueName: \"kubernetes.io/projected/a2911be1-4bd5-4cbc-82f9-c08836c4d50a-kube-api-access-v7xjv\") pod \"nova-cell0-0881-account-create-s2482\" (UID: \"a2911be1-4bd5-4cbc-82f9-c08836c4d50a\") " pod="openstack/nova-cell0-0881-account-create-s2482" Sep 30 17:21:13 crc kubenswrapper[4821]: I0930 17:21:13.539959 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkf2l\" (UniqueName: \"kubernetes.io/projected/84c218e4-5e8b-4a1b-83ff-6bde410b1bab-kube-api-access-hkf2l\") pod \"nova-cell1-27c1-account-create-z6qzm\" (UID: \"84c218e4-5e8b-4a1b-83ff-6bde410b1bab\") " pod="openstack/nova-cell1-27c1-account-create-z6qzm" Sep 30 17:21:13 crc kubenswrapper[4821]: I0930 17:21:13.561868 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-0881-account-create-s2482" Sep 30 17:21:13 crc kubenswrapper[4821]: I0930 17:21:13.641916 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkf2l\" (UniqueName: \"kubernetes.io/projected/84c218e4-5e8b-4a1b-83ff-6bde410b1bab-kube-api-access-hkf2l\") pod \"nova-cell1-27c1-account-create-z6qzm\" (UID: \"84c218e4-5e8b-4a1b-83ff-6bde410b1bab\") " pod="openstack/nova-cell1-27c1-account-create-z6qzm" Sep 30 17:21:13 crc kubenswrapper[4821]: I0930 17:21:13.660242 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkf2l\" (UniqueName: \"kubernetes.io/projected/84c218e4-5e8b-4a1b-83ff-6bde410b1bab-kube-api-access-hkf2l\") pod \"nova-cell1-27c1-account-create-z6qzm\" (UID: \"84c218e4-5e8b-4a1b-83ff-6bde410b1bab\") " pod="openstack/nova-cell1-27c1-account-create-z6qzm" Sep 30 17:21:13 crc kubenswrapper[4821]: I0930 17:21:13.753100 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-27c1-account-create-z6qzm" Sep 30 17:21:13 crc kubenswrapper[4821]: I0930 17:21:13.972619 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0881-account-create-s2482"] Sep 30 17:21:13 crc kubenswrapper[4821]: W0930 17:21:13.985322 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2911be1_4bd5_4cbc_82f9_c08836c4d50a.slice/crio-3bfc6d2a06041211509f2c90c17898dbe7cee2a286cc149ef08caa6eda5aeb58 WatchSource:0}: Error finding container 3bfc6d2a06041211509f2c90c17898dbe7cee2a286cc149ef08caa6eda5aeb58: Status 404 returned error can't find the container with id 3bfc6d2a06041211509f2c90c17898dbe7cee2a286cc149ef08caa6eda5aeb58 Sep 30 17:21:14 crc kubenswrapper[4821]: I0930 17:21:14.343463 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-27c1-account-create-z6qzm"] Sep 30 17:21:14 crc kubenswrapper[4821]: W0930 17:21:14.347662 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84c218e4_5e8b_4a1b_83ff_6bde410b1bab.slice/crio-1ea42ad0b6acefb5796e9a831619e20c9b4179a9e06b4c3a81fa0448f2a6c317 WatchSource:0}: Error finding container 1ea42ad0b6acefb5796e9a831619e20c9b4179a9e06b4c3a81fa0448f2a6c317: Status 404 returned error can't find the container with id 1ea42ad0b6acefb5796e9a831619e20c9b4179a9e06b4c3a81fa0448f2a6c317 Sep 30 17:21:14 crc kubenswrapper[4821]: I0930 17:21:14.956697 4821 generic.go:334] "Generic (PLEG): container finished" podID="a2911be1-4bd5-4cbc-82f9-c08836c4d50a" containerID="bf02100b3ea37cd5bbafadfc3b22788f79578fa4d776407fc1d00e8fc5e0ecc6" exitCode=0 Sep 30 17:21:14 crc kubenswrapper[4821]: I0930 17:21:14.956763 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0881-account-create-s2482" event={"ID":"a2911be1-4bd5-4cbc-82f9-c08836c4d50a","Type":"ContainerDied","Data":"bf02100b3ea37cd5bbafadfc3b22788f79578fa4d776407fc1d00e8fc5e0ecc6"} Sep 30 17:21:14 crc kubenswrapper[4821]: I0930 17:21:14.956789 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0881-account-create-s2482" event={"ID":"a2911be1-4bd5-4cbc-82f9-c08836c4d50a","Type":"ContainerStarted","Data":"3bfc6d2a06041211509f2c90c17898dbe7cee2a286cc149ef08caa6eda5aeb58"} Sep 30 17:21:14 crc kubenswrapper[4821]: I0930 17:21:14.959487 4821 generic.go:334] "Generic (PLEG): container finished" 
podID="84c218e4-5e8b-4a1b-83ff-6bde410b1bab" containerID="0923cc06fe202faa8dc5933a7426f2ddabfc52f5924962ebcd43a634ed3e35eb" exitCode=0 Sep 30 17:21:14 crc kubenswrapper[4821]: I0930 17:21:14.959548 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-27c1-account-create-z6qzm" event={"ID":"84c218e4-5e8b-4a1b-83ff-6bde410b1bab","Type":"ContainerDied","Data":"0923cc06fe202faa8dc5933a7426f2ddabfc52f5924962ebcd43a634ed3e35eb"} Sep 30 17:21:14 crc kubenswrapper[4821]: I0930 17:21:14.959586 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-27c1-account-create-z6qzm" event={"ID":"84c218e4-5e8b-4a1b-83ff-6bde410b1bab","Type":"ContainerStarted","Data":"1ea42ad0b6acefb5796e9a831619e20c9b4179a9e06b4c3a81fa0448f2a6c317"} Sep 30 17:21:15 crc kubenswrapper[4821]: I0930 17:21:15.324844 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Sep 30 17:21:15 crc kubenswrapper[4821]: I0930 17:21:15.727054 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b974b45dd-mbzvm" Sep 30 17:21:15 crc kubenswrapper[4821]: I0930 17:21:15.865128 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/abd2779c-c7a7-4d42-8e83-7cbec573d595-horizon-secret-key\") pod \"abd2779c-c7a7-4d42-8e83-7cbec573d595\" (UID: \"abd2779c-c7a7-4d42-8e83-7cbec573d595\") " Sep 30 17:21:15 crc kubenswrapper[4821]: I0930 17:21:15.865518 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/abd2779c-c7a7-4d42-8e83-7cbec573d595-horizon-tls-certs\") pod \"abd2779c-c7a7-4d42-8e83-7cbec573d595\" (UID: \"abd2779c-c7a7-4d42-8e83-7cbec573d595\") " Sep 30 17:21:15 crc kubenswrapper[4821]: I0930 17:21:15.865570 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84kp6\" (UniqueName: \"kubernetes.io/projected/abd2779c-c7a7-4d42-8e83-7cbec573d595-kube-api-access-84kp6\") pod \"abd2779c-c7a7-4d42-8e83-7cbec573d595\" (UID: \"abd2779c-c7a7-4d42-8e83-7cbec573d595\") " Sep 30 17:21:15 crc kubenswrapper[4821]: I0930 17:21:15.865606 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abd2779c-c7a7-4d42-8e83-7cbec573d595-combined-ca-bundle\") pod \"abd2779c-c7a7-4d42-8e83-7cbec573d595\" (UID: \"abd2779c-c7a7-4d42-8e83-7cbec573d595\") " Sep 30 17:21:15 crc kubenswrapper[4821]: I0930 17:21:15.865641 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abd2779c-c7a7-4d42-8e83-7cbec573d595-scripts\") pod \"abd2779c-c7a7-4d42-8e83-7cbec573d595\" (UID: \"abd2779c-c7a7-4d42-8e83-7cbec573d595\") " Sep 30 17:21:15 crc kubenswrapper[4821]: I0930 17:21:15.865705 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/abd2779c-c7a7-4d42-8e83-7cbec573d595-config-data\") pod \"abd2779c-c7a7-4d42-8e83-7cbec573d595\" (UID: \"abd2779c-c7a7-4d42-8e83-7cbec573d595\") " Sep 30 17:21:15 crc kubenswrapper[4821]: I0930 17:21:15.865730 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abd2779c-c7a7-4d42-8e83-7cbec573d595-logs\") pod \"abd2779c-c7a7-4d42-8e83-7cbec573d595\" (UID: 
\"abd2779c-c7a7-4d42-8e83-7cbec573d595\") " Sep 30 17:21:15 crc kubenswrapper[4821]: I0930 17:21:15.866723 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abd2779c-c7a7-4d42-8e83-7cbec573d595-logs" (OuterVolumeSpecName: "logs") pod "abd2779c-c7a7-4d42-8e83-7cbec573d595" (UID: "abd2779c-c7a7-4d42-8e83-7cbec573d595"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:21:15 crc kubenswrapper[4821]: I0930 17:21:15.874356 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abd2779c-c7a7-4d42-8e83-7cbec573d595-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "abd2779c-c7a7-4d42-8e83-7cbec573d595" (UID: "abd2779c-c7a7-4d42-8e83-7cbec573d595"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:21:15 crc kubenswrapper[4821]: I0930 17:21:15.885622 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abd2779c-c7a7-4d42-8e83-7cbec573d595-kube-api-access-84kp6" (OuterVolumeSpecName: "kube-api-access-84kp6") pod "abd2779c-c7a7-4d42-8e83-7cbec573d595" (UID: "abd2779c-c7a7-4d42-8e83-7cbec573d595"). InnerVolumeSpecName "kube-api-access-84kp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:21:15 crc kubenswrapper[4821]: I0930 17:21:15.889028 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abd2779c-c7a7-4d42-8e83-7cbec573d595-config-data" (OuterVolumeSpecName: "config-data") pod "abd2779c-c7a7-4d42-8e83-7cbec573d595" (UID: "abd2779c-c7a7-4d42-8e83-7cbec573d595"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:21:15 crc kubenswrapper[4821]: I0930 17:21:15.893591 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abd2779c-c7a7-4d42-8e83-7cbec573d595-scripts" (OuterVolumeSpecName: "scripts") pod "abd2779c-c7a7-4d42-8e83-7cbec573d595" (UID: "abd2779c-c7a7-4d42-8e83-7cbec573d595"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:21:15 crc kubenswrapper[4821]: I0930 17:21:15.907611 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abd2779c-c7a7-4d42-8e83-7cbec573d595-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "abd2779c-c7a7-4d42-8e83-7cbec573d595" (UID: "abd2779c-c7a7-4d42-8e83-7cbec573d595"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:21:15 crc kubenswrapper[4821]: I0930 17:21:15.933945 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abd2779c-c7a7-4d42-8e83-7cbec573d595-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "abd2779c-c7a7-4d42-8e83-7cbec573d595" (UID: "abd2779c-c7a7-4d42-8e83-7cbec573d595"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:21:15 crc kubenswrapper[4821]: I0930 17:21:15.967935 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abd2779c-c7a7-4d42-8e83-7cbec573d595-logs\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:15 crc kubenswrapper[4821]: I0930 17:21:15.968061 4821 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/abd2779c-c7a7-4d42-8e83-7cbec573d595-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:15 crc kubenswrapper[4821]: I0930 17:21:15.968072 4821 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/abd2779c-c7a7-4d42-8e83-7cbec573d595-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:15 crc kubenswrapper[4821]: I0930 17:21:15.968103 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84kp6\" (UniqueName: \"kubernetes.io/projected/abd2779c-c7a7-4d42-8e83-7cbec573d595-kube-api-access-84kp6\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:15 crc kubenswrapper[4821]: I0930 17:21:15.968115 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abd2779c-c7a7-4d42-8e83-7cbec573d595-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:15 crc kubenswrapper[4821]: I0930 17:21:15.968123 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abd2779c-c7a7-4d42-8e83-7cbec573d595-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:15 crc kubenswrapper[4821]: I0930 17:21:15.968131 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/abd2779c-c7a7-4d42-8e83-7cbec573d595-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:15 crc kubenswrapper[4821]: I0930 17:21:15.973668 4821 generic.go:334] "Generic (PLEG): container finished" podID="abd2779c-c7a7-4d42-8e83-7cbec573d595" containerID="3f2e2c5b1ca73cd1df218b04af23793901759c941bb7c049b256c6415feea341" exitCode=137 Sep 30 17:21:15 crc kubenswrapper[4821]: I0930 17:21:15.973682 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b974b45dd-mbzvm" event={"ID":"abd2779c-c7a7-4d42-8e83-7cbec573d595","Type":"ContainerDied","Data":"3f2e2c5b1ca73cd1df218b04af23793901759c941bb7c049b256c6415feea341"} Sep 30 17:21:15 crc kubenswrapper[4821]: I0930 17:21:15.973728 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b974b45dd-mbzvm" event={"ID":"abd2779c-c7a7-4d42-8e83-7cbec573d595","Type":"ContainerDied","Data":"1c6fc6187908386fb2830478c1c239f6c9373f459da346efb7bdb7089970b7b2"} Sep 30 17:21:15 crc kubenswrapper[4821]: I0930 17:21:15.973751 4821 scope.go:117] "RemoveContainer" containerID="8b9d25a4612b66e90a5bf40aa674b5cc3cc6d13a971523e910f6f524e850cfd0" Sep 30 17:21:15 crc kubenswrapper[4821]: I0930 17:21:15.973690 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5b974b45dd-mbzvm" Sep 30 17:21:16 crc kubenswrapper[4821]: I0930 17:21:16.014367 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b974b45dd-mbzvm"] Sep 30 17:21:16 crc kubenswrapper[4821]: I0930 17:21:16.021876 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5b974b45dd-mbzvm"] Sep 30 17:21:16 crc kubenswrapper[4821]: I0930 17:21:16.156379 4821 scope.go:117] "RemoveContainer" containerID="3f2e2c5b1ca73cd1df218b04af23793901759c941bb7c049b256c6415feea341" Sep 30 17:21:16 crc kubenswrapper[4821]: I0930 17:21:16.196509 4821 scope.go:117] "RemoveContainer" containerID="8b9d25a4612b66e90a5bf40aa674b5cc3cc6d13a971523e910f6f524e850cfd0" Sep 30 17:21:16 crc kubenswrapper[4821]: E0930 17:21:16.198575 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b9d25a4612b66e90a5bf40aa674b5cc3cc6d13a971523e910f6f524e850cfd0\": container with ID starting with 8b9d25a4612b66e90a5bf40aa674b5cc3cc6d13a971523e910f6f524e850cfd0 not found: ID does not exist" containerID="8b9d25a4612b66e90a5bf40aa674b5cc3cc6d13a971523e910f6f524e850cfd0" Sep 30 17:21:16 crc kubenswrapper[4821]: I0930 17:21:16.198616 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b9d25a4612b66e90a5bf40aa674b5cc3cc6d13a971523e910f6f524e850cfd0"} err="failed to get container status \"8b9d25a4612b66e90a5bf40aa674b5cc3cc6d13a971523e910f6f524e850cfd0\": rpc error: code = NotFound desc = could not find container \"8b9d25a4612b66e90a5bf40aa674b5cc3cc6d13a971523e910f6f524e850cfd0\": container with ID starting with 8b9d25a4612b66e90a5bf40aa674b5cc3cc6d13a971523e910f6f524e850cfd0 not found: ID does not exist" Sep 30 17:21:16 crc kubenswrapper[4821]: I0930 17:21:16.198636 4821 scope.go:117] "RemoveContainer" containerID="3f2e2c5b1ca73cd1df218b04af23793901759c941bb7c049b256c6415feea341" Sep 30 17:21:16 crc kubenswrapper[4821]: E0930 17:21:16.199370 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f2e2c5b1ca73cd1df218b04af23793901759c941bb7c049b256c6415feea341\": container with ID starting with 3f2e2c5b1ca73cd1df218b04af23793901759c941bb7c049b256c6415feea341 not found: ID does not exist" containerID="3f2e2c5b1ca73cd1df218b04af23793901759c941bb7c049b256c6415feea341" Sep 30 17:21:16 crc kubenswrapper[4821]: I0930 17:21:16.199388 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f2e2c5b1ca73cd1df218b04af23793901759c941bb7c049b256c6415feea341"} err="failed to get container status \"3f2e2c5b1ca73cd1df218b04af23793901759c941bb7c049b256c6415feea341\": rpc error: code = NotFound desc = could not find container \"3f2e2c5b1ca73cd1df218b04af23793901759c941bb7c049b256c6415feea341\": container with ID starting with 3f2e2c5b1ca73cd1df218b04af23793901759c941bb7c049b256c6415feea341 not found: ID does not exist" Sep 30 17:21:16 crc kubenswrapper[4821]: I0930 17:21:16.230525 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-27c1-account-create-z6qzm" Sep 30 17:21:16 crc kubenswrapper[4821]: I0930 17:21:16.286454 4821 util.go:48] "No ready sandbox for pod can be found. 
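The two RemoveContainer failures above are a benign race: the containers were already gone by the time the runtime was re-queried, so CRI-O answered with gRPC NotFound and the kubelet logged the error and carried on. A sketch of classifying that error with the standard grpc-go status API (requires the google.golang.org/grpc module); the message text is copied from the log:

    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // alreadyGone reports whether a CRI call failed only because the target
    // container no longer exists, i.e. the race seen above.
    func alreadyGone(err error) bool {
    	return status.Code(err) == codes.NotFound
    }

    func main() {
    	err := status.Error(codes.NotFound,
    		`could not find container "8b9d25a4612b66e90a5bf40aa674b5cc3cc6d13a971523e910f6f524e850cfd0"`)
    	if alreadyGone(err) {
    		fmt.Println("container already removed; treat RemoveContainer as a no-op")
    	}
    }
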
Need to start a new one" pod="openstack/nova-cell0-0881-account-create-s2482" Sep 30 17:21:16 crc kubenswrapper[4821]: I0930 17:21:16.373984 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkf2l\" (UniqueName: \"kubernetes.io/projected/84c218e4-5e8b-4a1b-83ff-6bde410b1bab-kube-api-access-hkf2l\") pod \"84c218e4-5e8b-4a1b-83ff-6bde410b1bab\" (UID: \"84c218e4-5e8b-4a1b-83ff-6bde410b1bab\") " Sep 30 17:21:16 crc kubenswrapper[4821]: I0930 17:21:16.377410 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84c218e4-5e8b-4a1b-83ff-6bde410b1bab-kube-api-access-hkf2l" (OuterVolumeSpecName: "kube-api-access-hkf2l") pod "84c218e4-5e8b-4a1b-83ff-6bde410b1bab" (UID: "84c218e4-5e8b-4a1b-83ff-6bde410b1bab"). InnerVolumeSpecName "kube-api-access-hkf2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:21:16 crc kubenswrapper[4821]: I0930 17:21:16.475697 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7xjv\" (UniqueName: \"kubernetes.io/projected/a2911be1-4bd5-4cbc-82f9-c08836c4d50a-kube-api-access-v7xjv\") pod \"a2911be1-4bd5-4cbc-82f9-c08836c4d50a\" (UID: \"a2911be1-4bd5-4cbc-82f9-c08836c4d50a\") " Sep 30 17:21:16 crc kubenswrapper[4821]: I0930 17:21:16.476066 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkf2l\" (UniqueName: \"kubernetes.io/projected/84c218e4-5e8b-4a1b-83ff-6bde410b1bab-kube-api-access-hkf2l\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:16 crc kubenswrapper[4821]: I0930 17:21:16.478374 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2911be1-4bd5-4cbc-82f9-c08836c4d50a-kube-api-access-v7xjv" (OuterVolumeSpecName: "kube-api-access-v7xjv") pod "a2911be1-4bd5-4cbc-82f9-c08836c4d50a" (UID: "a2911be1-4bd5-4cbc-82f9-c08836c4d50a"). InnerVolumeSpecName "kube-api-access-v7xjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:21:16 crc kubenswrapper[4821]: I0930 17:21:16.571791 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Sep 30 17:21:16 crc kubenswrapper[4821]: I0930 17:21:16.571999 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Sep 30 17:21:16 crc kubenswrapper[4821]: I0930 17:21:16.577955 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7xjv\" (UniqueName: \"kubernetes.io/projected/a2911be1-4bd5-4cbc-82f9-c08836c4d50a-kube-api-access-v7xjv\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:16 crc kubenswrapper[4821]: I0930 17:21:16.606238 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Sep 30 17:21:16 crc kubenswrapper[4821]: I0930 17:21:16.619440 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Sep 30 17:21:16 crc kubenswrapper[4821]: I0930 17:21:16.716360 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abd2779c-c7a7-4d42-8e83-7cbec573d595" path="/var/lib/kubelet/pods/abd2779c-c7a7-4d42-8e83-7cbec573d595/volumes" Sep 30 17:21:16 crc kubenswrapper[4821]: I0930 17:21:16.985979 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-27c1-account-create-z6qzm" Sep 30 17:21:16 crc kubenswrapper[4821]: I0930 17:21:16.985981 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-27c1-account-create-z6qzm" event={"ID":"84c218e4-5e8b-4a1b-83ff-6bde410b1bab","Type":"ContainerDied","Data":"1ea42ad0b6acefb5796e9a831619e20c9b4179a9e06b4c3a81fa0448f2a6c317"} Sep 30 17:21:16 crc kubenswrapper[4821]: I0930 17:21:16.986106 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ea42ad0b6acefb5796e9a831619e20c9b4179a9e06b4c3a81fa0448f2a6c317" Sep 30 17:21:16 crc kubenswrapper[4821]: I0930 17:21:16.988057 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0881-account-create-s2482" Sep 30 17:21:16 crc kubenswrapper[4821]: I0930 17:21:16.988105 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0881-account-create-s2482" event={"ID":"a2911be1-4bd5-4cbc-82f9-c08836c4d50a","Type":"ContainerDied","Data":"3bfc6d2a06041211509f2c90c17898dbe7cee2a286cc149ef08caa6eda5aeb58"} Sep 30 17:21:16 crc kubenswrapper[4821]: I0930 17:21:16.988296 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bfc6d2a06041211509f2c90c17898dbe7cee2a286cc149ef08caa6eda5aeb58" Sep 30 17:21:16 crc kubenswrapper[4821]: I0930 17:21:16.988967 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Sep 30 17:21:16 crc kubenswrapper[4821]: I0930 17:21:16.989000 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Sep 30 17:21:18 crc kubenswrapper[4821]: I0930 17:21:18.489169 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-j6rmp"] Sep 30 17:21:18 crc kubenswrapper[4821]: E0930 17:21:18.489516 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2911be1-4bd5-4cbc-82f9-c08836c4d50a" containerName="mariadb-account-create" Sep 30 17:21:18 crc kubenswrapper[4821]: I0930 17:21:18.489531 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2911be1-4bd5-4cbc-82f9-c08836c4d50a" containerName="mariadb-account-create" Sep 30 17:21:18 crc kubenswrapper[4821]: E0930 17:21:18.489542 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84c218e4-5e8b-4a1b-83ff-6bde410b1bab" containerName="mariadb-account-create" Sep 30 17:21:18 crc kubenswrapper[4821]: I0930 17:21:18.489548 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="84c218e4-5e8b-4a1b-83ff-6bde410b1bab" containerName="mariadb-account-create" Sep 30 17:21:18 crc kubenswrapper[4821]: E0930 17:21:18.489586 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abd2779c-c7a7-4d42-8e83-7cbec573d595" containerName="horizon-log" Sep 30 17:21:18 crc kubenswrapper[4821]: I0930 17:21:18.489593 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="abd2779c-c7a7-4d42-8e83-7cbec573d595" containerName="horizon-log" Sep 30 17:21:18 crc kubenswrapper[4821]: E0930 17:21:18.489614 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abd2779c-c7a7-4d42-8e83-7cbec573d595" containerName="horizon" Sep 30 17:21:18 crc kubenswrapper[4821]: I0930 17:21:18.489619 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="abd2779c-c7a7-4d42-8e83-7cbec573d595" containerName="horizon" Sep 30 17:21:18 crc kubenswrapper[4821]: I0930 17:21:18.489795 4821 
memory_manager.go:354] "RemoveStaleState removing state" podUID="abd2779c-c7a7-4d42-8e83-7cbec573d595" containerName="horizon" Sep 30 17:21:18 crc kubenswrapper[4821]: I0930 17:21:18.489804 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="abd2779c-c7a7-4d42-8e83-7cbec573d595" containerName="horizon-log" Sep 30 17:21:18 crc kubenswrapper[4821]: I0930 17:21:18.489815 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="84c218e4-5e8b-4a1b-83ff-6bde410b1bab" containerName="mariadb-account-create" Sep 30 17:21:18 crc kubenswrapper[4821]: I0930 17:21:18.489823 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2911be1-4bd5-4cbc-82f9-c08836c4d50a" containerName="mariadb-account-create" Sep 30 17:21:18 crc kubenswrapper[4821]: I0930 17:21:18.490354 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-j6rmp" Sep 30 17:21:18 crc kubenswrapper[4821]: I0930 17:21:18.492987 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Sep 30 17:21:18 crc kubenswrapper[4821]: I0930 17:21:18.493245 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rvsn4" Sep 30 17:21:18 crc kubenswrapper[4821]: I0930 17:21:18.496268 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Sep 30 17:21:18 crc kubenswrapper[4821]: I0930 17:21:18.516096 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-j6rmp"] Sep 30 17:21:18 crc kubenswrapper[4821]: I0930 17:21:18.615388 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8cd6dfc-6145-4325-b100-ace6b130ad73-scripts\") pod \"nova-cell0-conductor-db-sync-j6rmp\" (UID: \"b8cd6dfc-6145-4325-b100-ace6b130ad73\") " pod="openstack/nova-cell0-conductor-db-sync-j6rmp" Sep 30 17:21:18 crc kubenswrapper[4821]: I0930 17:21:18.615727 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8cd6dfc-6145-4325-b100-ace6b130ad73-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-j6rmp\" (UID: \"b8cd6dfc-6145-4325-b100-ace6b130ad73\") " pod="openstack/nova-cell0-conductor-db-sync-j6rmp" Sep 30 17:21:18 crc kubenswrapper[4821]: I0930 17:21:18.615805 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmmjn\" (UniqueName: \"kubernetes.io/projected/b8cd6dfc-6145-4325-b100-ace6b130ad73-kube-api-access-xmmjn\") pod \"nova-cell0-conductor-db-sync-j6rmp\" (UID: \"b8cd6dfc-6145-4325-b100-ace6b130ad73\") " pod="openstack/nova-cell0-conductor-db-sync-j6rmp" Sep 30 17:21:18 crc kubenswrapper[4821]: I0930 17:21:18.615837 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8cd6dfc-6145-4325-b100-ace6b130ad73-config-data\") pod \"nova-cell0-conductor-db-sync-j6rmp\" (UID: \"b8cd6dfc-6145-4325-b100-ace6b130ad73\") " pod="openstack/nova-cell0-conductor-db-sync-j6rmp" Sep 30 17:21:18 crc kubenswrapper[4821]: I0930 17:21:18.716864 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b8cd6dfc-6145-4325-b100-ace6b130ad73-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-j6rmp\" (UID: \"b8cd6dfc-6145-4325-b100-ace6b130ad73\") " pod="openstack/nova-cell0-conductor-db-sync-j6rmp" Sep 30 17:21:18 crc kubenswrapper[4821]: I0930 17:21:18.716920 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmmjn\" (UniqueName: \"kubernetes.io/projected/b8cd6dfc-6145-4325-b100-ace6b130ad73-kube-api-access-xmmjn\") pod \"nova-cell0-conductor-db-sync-j6rmp\" (UID: \"b8cd6dfc-6145-4325-b100-ace6b130ad73\") " pod="openstack/nova-cell0-conductor-db-sync-j6rmp" Sep 30 17:21:18 crc kubenswrapper[4821]: I0930 17:21:18.716947 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8cd6dfc-6145-4325-b100-ace6b130ad73-config-data\") pod \"nova-cell0-conductor-db-sync-j6rmp\" (UID: \"b8cd6dfc-6145-4325-b100-ace6b130ad73\") " pod="openstack/nova-cell0-conductor-db-sync-j6rmp" Sep 30 17:21:18 crc kubenswrapper[4821]: I0930 17:21:18.716994 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8cd6dfc-6145-4325-b100-ace6b130ad73-scripts\") pod \"nova-cell0-conductor-db-sync-j6rmp\" (UID: \"b8cd6dfc-6145-4325-b100-ace6b130ad73\") " pod="openstack/nova-cell0-conductor-db-sync-j6rmp" Sep 30 17:21:18 crc kubenswrapper[4821]: I0930 17:21:18.723107 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8cd6dfc-6145-4325-b100-ace6b130ad73-config-data\") pod \"nova-cell0-conductor-db-sync-j6rmp\" (UID: \"b8cd6dfc-6145-4325-b100-ace6b130ad73\") " pod="openstack/nova-cell0-conductor-db-sync-j6rmp" Sep 30 17:21:18 crc kubenswrapper[4821]: I0930 17:21:18.725643 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8cd6dfc-6145-4325-b100-ace6b130ad73-scripts\") pod \"nova-cell0-conductor-db-sync-j6rmp\" (UID: \"b8cd6dfc-6145-4325-b100-ace6b130ad73\") " pod="openstack/nova-cell0-conductor-db-sync-j6rmp" Sep 30 17:21:18 crc kubenswrapper[4821]: I0930 17:21:18.730627 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8cd6dfc-6145-4325-b100-ace6b130ad73-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-j6rmp\" (UID: \"b8cd6dfc-6145-4325-b100-ace6b130ad73\") " pod="openstack/nova-cell0-conductor-db-sync-j6rmp" Sep 30 17:21:18 crc kubenswrapper[4821]: I0930 17:21:18.737578 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmmjn\" (UniqueName: \"kubernetes.io/projected/b8cd6dfc-6145-4325-b100-ace6b130ad73-kube-api-access-xmmjn\") pod \"nova-cell0-conductor-db-sync-j6rmp\" (UID: \"b8cd6dfc-6145-4325-b100-ace6b130ad73\") " pod="openstack/nova-cell0-conductor-db-sync-j6rmp" Sep 30 17:21:18 crc kubenswrapper[4821]: I0930 17:21:18.815452 4821 util.go:30] "No sandbox for pod can be found. 
Sep 30 17:21:19 crc kubenswrapper[4821]: I0930 17:21:19.059076 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-j6rmp"]
Sep 30 17:21:19 crc kubenswrapper[4821]: W0930 17:21:19.063699 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8cd6dfc_6145_4325_b100_ace6b130ad73.slice/crio-ea7552cc7b7a6bd86e183be6ae6b33bdc66ab069177b07f13d8e35613c69705d WatchSource:0}: Error finding container ea7552cc7b7a6bd86e183be6ae6b33bdc66ab069177b07f13d8e35613c69705d: Status 404 returned error can't find the container with id ea7552cc7b7a6bd86e183be6ae6b33bdc66ab069177b07f13d8e35613c69705d
Sep 30 17:21:19 crc kubenswrapper[4821]: I0930 17:21:19.229735 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Sep 30 17:21:19 crc kubenswrapper[4821]: I0930 17:21:19.229822 4821 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 30 17:21:19 crc kubenswrapper[4821]: I0930 17:21:19.235209 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Sep 30 17:21:19 crc kubenswrapper[4821]: I0930 17:21:19.235299 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Sep 30 17:21:19 crc kubenswrapper[4821]: I0930 17:21:19.273884 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Sep 30 17:21:19 crc kubenswrapper[4821]: I0930 17:21:19.277155 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Sep 30 17:21:19 crc kubenswrapper[4821]: I0930 17:21:19.289241 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Sep 30 17:21:20 crc kubenswrapper[4821]: I0930 17:21:20.014777 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-j6rmp" event={"ID":"b8cd6dfc-6145-4325-b100-ace6b130ad73","Type":"ContainerStarted","Data":"ea7552cc7b7a6bd86e183be6ae6b33bdc66ab069177b07f13d8e35613c69705d"}
Sep 30 17:21:20 crc kubenswrapper[4821]: I0930 17:21:20.015650 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Sep 30 17:21:20 crc kubenswrapper[4821]: I0930 17:21:20.015765 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Sep 30 17:21:21 crc kubenswrapper[4821]: I0930 17:21:21.883602 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Sep 30 17:21:21 crc kubenswrapper[4821]: I0930 17:21:21.894782 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Sep 30 17:21:27 crc kubenswrapper[4821]: I0930 17:21:27.073048 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-j6rmp" event={"ID":"b8cd6dfc-6145-4325-b100-ace6b130ad73","Type":"ContainerStarted","Data":"9b7360409521062fbccdc80fac63fafb12394b9e4548a3cff92d40c80ca573ac"}
Sep 30 17:21:27 crc kubenswrapper[4821]: I0930 17:21:27.085872 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-j6rmp" podStartSLOduration=1.474299501 podStartE2EDuration="9.085853589s" podCreationTimestamp="2025-09-30 17:21:18 +0000 UTC" firstStartedPulling="2025-09-30 17:21:19.065341496 +0000 UTC m=+1074.970387440" lastFinishedPulling="2025-09-30 17:21:26.676895584 +0000 UTC m=+1082.581941528" observedRunningTime="2025-09-30 17:21:27.085355157 +0000 UTC m=+1082.990401101" watchObservedRunningTime="2025-09-30 17:21:27.085853589 +0000 UTC m=+1082.990899533"
Sep 30 17:21:37 crc kubenswrapper[4821]: I0930 17:21:37.156037 4821 generic.go:334] "Generic (PLEG): container finished" podID="b8cd6dfc-6145-4325-b100-ace6b130ad73" containerID="9b7360409521062fbccdc80fac63fafb12394b9e4548a3cff92d40c80ca573ac" exitCode=0
Sep 30 17:21:37 crc kubenswrapper[4821]: I0930 17:21:37.156121 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-j6rmp" event={"ID":"b8cd6dfc-6145-4325-b100-ace6b130ad73","Type":"ContainerDied","Data":"9b7360409521062fbccdc80fac63fafb12394b9e4548a3cff92d40c80ca573ac"}
Sep 30 17:21:38 crc kubenswrapper[4821]: I0930 17:21:38.513147 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-j6rmp"
Sep 30 17:21:38 crc kubenswrapper[4821]: I0930 17:21:38.650428 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmmjn\" (UniqueName: \"kubernetes.io/projected/b8cd6dfc-6145-4325-b100-ace6b130ad73-kube-api-access-xmmjn\") pod \"b8cd6dfc-6145-4325-b100-ace6b130ad73\" (UID: \"b8cd6dfc-6145-4325-b100-ace6b130ad73\") "
Sep 30 17:21:38 crc kubenswrapper[4821]: I0930 17:21:38.650695 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8cd6dfc-6145-4325-b100-ace6b130ad73-combined-ca-bundle\") pod \"b8cd6dfc-6145-4325-b100-ace6b130ad73\" (UID: \"b8cd6dfc-6145-4325-b100-ace6b130ad73\") "
Sep 30 17:21:38 crc kubenswrapper[4821]: I0930 17:21:38.650719 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8cd6dfc-6145-4325-b100-ace6b130ad73-config-data\") pod \"b8cd6dfc-6145-4325-b100-ace6b130ad73\" (UID: \"b8cd6dfc-6145-4325-b100-ace6b130ad73\") "
Sep 30 17:21:38 crc kubenswrapper[4821]: I0930 17:21:38.650887 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8cd6dfc-6145-4325-b100-ace6b130ad73-scripts\") pod \"b8cd6dfc-6145-4325-b100-ace6b130ad73\" (UID: \"b8cd6dfc-6145-4325-b100-ace6b130ad73\") "
Sep 30 17:21:38 crc kubenswrapper[4821]: I0930 17:21:38.655541 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8cd6dfc-6145-4325-b100-ace6b130ad73-scripts" (OuterVolumeSpecName: "scripts") pod "b8cd6dfc-6145-4325-b100-ace6b130ad73" (UID: "b8cd6dfc-6145-4325-b100-ace6b130ad73"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:21:38 crc kubenswrapper[4821]: I0930 17:21:38.658220 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8cd6dfc-6145-4325-b100-ace6b130ad73-kube-api-access-xmmjn" (OuterVolumeSpecName: "kube-api-access-xmmjn") pod "b8cd6dfc-6145-4325-b100-ace6b130ad73" (UID: "b8cd6dfc-6145-4325-b100-ace6b130ad73"). InnerVolumeSpecName "kube-api-access-xmmjn". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:21:38 crc kubenswrapper[4821]: I0930 17:21:38.677358 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8cd6dfc-6145-4325-b100-ace6b130ad73-config-data" (OuterVolumeSpecName: "config-data") pod "b8cd6dfc-6145-4325-b100-ace6b130ad73" (UID: "b8cd6dfc-6145-4325-b100-ace6b130ad73"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:21:38 crc kubenswrapper[4821]: I0930 17:21:38.677809 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8cd6dfc-6145-4325-b100-ace6b130ad73-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8cd6dfc-6145-4325-b100-ace6b130ad73" (UID: "b8cd6dfc-6145-4325-b100-ace6b130ad73"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:21:38 crc kubenswrapper[4821]: I0930 17:21:38.752253 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8cd6dfc-6145-4325-b100-ace6b130ad73-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:38 crc kubenswrapper[4821]: I0930 17:21:38.752288 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmmjn\" (UniqueName: \"kubernetes.io/projected/b8cd6dfc-6145-4325-b100-ace6b130ad73-kube-api-access-xmmjn\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:38 crc kubenswrapper[4821]: I0930 17:21:38.752299 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8cd6dfc-6145-4325-b100-ace6b130ad73-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:38 crc kubenswrapper[4821]: I0930 17:21:38.752308 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8cd6dfc-6145-4325-b100-ace6b130ad73-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:21:39 crc kubenswrapper[4821]: I0930 17:21:39.173661 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-j6rmp" event={"ID":"b8cd6dfc-6145-4325-b100-ace6b130ad73","Type":"ContainerDied","Data":"ea7552cc7b7a6bd86e183be6ae6b33bdc66ab069177b07f13d8e35613c69705d"} Sep 30 17:21:39 crc kubenswrapper[4821]: I0930 17:21:39.173906 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea7552cc7b7a6bd86e183be6ae6b33bdc66ab069177b07f13d8e35613c69705d" Sep 30 17:21:39 crc kubenswrapper[4821]: I0930 17:21:39.173739 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-j6rmp" Sep 30 17:21:39 crc kubenswrapper[4821]: I0930 17:21:39.269624 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Sep 30 17:21:39 crc kubenswrapper[4821]: E0930 17:21:39.270301 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8cd6dfc-6145-4325-b100-ace6b130ad73" containerName="nova-cell0-conductor-db-sync" Sep 30 17:21:39 crc kubenswrapper[4821]: I0930 17:21:39.270330 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8cd6dfc-6145-4325-b100-ace6b130ad73" containerName="nova-cell0-conductor-db-sync" Sep 30 17:21:39 crc kubenswrapper[4821]: I0930 17:21:39.270499 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8cd6dfc-6145-4325-b100-ace6b130ad73" containerName="nova-cell0-conductor-db-sync" Sep 30 17:21:39 crc kubenswrapper[4821]: I0930 17:21:39.271193 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Sep 30 17:21:39 crc kubenswrapper[4821]: I0930 17:21:39.274737 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rvsn4" Sep 30 17:21:39 crc kubenswrapper[4821]: I0930 17:21:39.274952 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Sep 30 17:21:39 crc kubenswrapper[4821]: I0930 17:21:39.280747 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Sep 30 17:21:39 crc kubenswrapper[4821]: I0930 17:21:39.361138 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54929d4b-fffc-44ff-b7fd-046e8e86334f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"54929d4b-fffc-44ff-b7fd-046e8e86334f\") " pod="openstack/nova-cell0-conductor-0" Sep 30 17:21:39 crc kubenswrapper[4821]: I0930 17:21:39.361188 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54929d4b-fffc-44ff-b7fd-046e8e86334f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"54929d4b-fffc-44ff-b7fd-046e8e86334f\") " pod="openstack/nova-cell0-conductor-0" Sep 30 17:21:39 crc kubenswrapper[4821]: I0930 17:21:39.361209 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz5nb\" (UniqueName: \"kubernetes.io/projected/54929d4b-fffc-44ff-b7fd-046e8e86334f-kube-api-access-vz5nb\") pod \"nova-cell0-conductor-0\" (UID: \"54929d4b-fffc-44ff-b7fd-046e8e86334f\") " pod="openstack/nova-cell0-conductor-0" Sep 30 17:21:39 crc kubenswrapper[4821]: I0930 17:21:39.463473 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54929d4b-fffc-44ff-b7fd-046e8e86334f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"54929d4b-fffc-44ff-b7fd-046e8e86334f\") " pod="openstack/nova-cell0-conductor-0" Sep 30 17:21:39 crc kubenswrapper[4821]: I0930 17:21:39.463780 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54929d4b-fffc-44ff-b7fd-046e8e86334f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"54929d4b-fffc-44ff-b7fd-046e8e86334f\") " pod="openstack/nova-cell0-conductor-0" Sep 30 17:21:39 crc kubenswrapper[4821]: I0930 
Sep 30 17:21:39 crc kubenswrapper[4821]: I0930 17:21:39.473945 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54929d4b-fffc-44ff-b7fd-046e8e86334f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"54929d4b-fffc-44ff-b7fd-046e8e86334f\") " pod="openstack/nova-cell0-conductor-0"
Sep 30 17:21:39 crc kubenswrapper[4821]: I0930 17:21:39.477977 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54929d4b-fffc-44ff-b7fd-046e8e86334f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"54929d4b-fffc-44ff-b7fd-046e8e86334f\") " pod="openstack/nova-cell0-conductor-0"
Sep 30 17:21:39 crc kubenswrapper[4821]: I0930 17:21:39.481674 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz5nb\" (UniqueName: \"kubernetes.io/projected/54929d4b-fffc-44ff-b7fd-046e8e86334f-kube-api-access-vz5nb\") pod \"nova-cell0-conductor-0\" (UID: \"54929d4b-fffc-44ff-b7fd-046e8e86334f\") " pod="openstack/nova-cell0-conductor-0"
Sep 30 17:21:39 crc kubenswrapper[4821]: I0930 17:21:39.624264 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Sep 30 17:21:40 crc kubenswrapper[4821]: I0930 17:21:40.052672 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Sep 30 17:21:40 crc kubenswrapper[4821]: W0930 17:21:40.056306 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54929d4b_fffc_44ff_b7fd_046e8e86334f.slice/crio-0fd0a1b228c006425e155c20365e002e4e8df243dc5c2c2afced4cecbd23bf69 WatchSource:0}: Error finding container 0fd0a1b228c006425e155c20365e002e4e8df243dc5c2c2afced4cecbd23bf69: Status 404 returned error can't find the container with id 0fd0a1b228c006425e155c20365e002e4e8df243dc5c2c2afced4cecbd23bf69
Sep 30 17:21:40 crc kubenswrapper[4821]: I0930 17:21:40.184224 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"54929d4b-fffc-44ff-b7fd-046e8e86334f","Type":"ContainerStarted","Data":"0fd0a1b228c006425e155c20365e002e4e8df243dc5c2c2afced4cecbd23bf69"}
Sep 30 17:21:41 crc kubenswrapper[4821]: I0930 17:21:41.193929 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"54929d4b-fffc-44ff-b7fd-046e8e86334f","Type":"ContainerStarted","Data":"96e3d950a908cb975119e366416fb663f36567b24e3d7a159bd89896baf754f1"}
Sep 30 17:21:41 crc kubenswrapper[4821]: I0930 17:21:41.194312 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Sep 30 17:21:41 crc kubenswrapper[4821]: I0930 17:21:41.216640 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.21661799 podStartE2EDuration="2.21661799s" podCreationTimestamp="2025-09-30 17:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:21:41.206810056 +0000 UTC m=+1097.111856010" watchObservedRunningTime="2025-09-30 17:21:41.21661799 +0000 UTC m=+1097.121663934"
Sep 30 17:21:49 crc kubenswrapper[4821]: I0930 17:21:49.349700 4821 patch_prober.go:28] interesting pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 30 17:21:49 crc kubenswrapper[4821]: I0930 17:21:49.350192 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 30 17:21:49 crc kubenswrapper[4821]: I0930 17:21:49.657528 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.096553 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-676jk"]
Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.097757 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-676jk"
Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.102469 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.107623 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.125146 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-676jk"]
Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.243363 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn72f\" (UniqueName: \"kubernetes.io/projected/824f4509-b658-47ca-90b7-3725a3839996-kube-api-access-cn72f\") pod \"nova-cell0-cell-mapping-676jk\" (UID: \"824f4509-b658-47ca-90b7-3725a3839996\") " pod="openstack/nova-cell0-cell-mapping-676jk"
Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.243456 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/824f4509-b658-47ca-90b7-3725a3839996-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-676jk\" (UID: \"824f4509-b658-47ca-90b7-3725a3839996\") " pod="openstack/nova-cell0-cell-mapping-676jk"
Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.243513 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/824f4509-b658-47ca-90b7-3725a3839996-config-data\") pod \"nova-cell0-cell-mapping-676jk\" (UID: \"824f4509-b658-47ca-90b7-3725a3839996\") " pod="openstack/nova-cell0-cell-mapping-676jk"
Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.243534 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/824f4509-b658-47ca-90b7-3725a3839996-scripts\") pod \"nova-cell0-cell-mapping-676jk\" (UID: \"824f4509-b658-47ca-90b7-3725a3839996\") " pod="openstack/nova-cell0-cell-mapping-676jk"
pod="openstack/nova-cell0-cell-mapping-676jk" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.284706 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.286000 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.292364 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.299709 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.347076 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/824f4509-b658-47ca-90b7-3725a3839996-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-676jk\" (UID: \"824f4509-b658-47ca-90b7-3725a3839996\") " pod="openstack/nova-cell0-cell-mapping-676jk" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.347162 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/824f4509-b658-47ca-90b7-3725a3839996-config-data\") pod \"nova-cell0-cell-mapping-676jk\" (UID: \"824f4509-b658-47ca-90b7-3725a3839996\") " pod="openstack/nova-cell0-cell-mapping-676jk" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.347193 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/824f4509-b658-47ca-90b7-3725a3839996-scripts\") pod \"nova-cell0-cell-mapping-676jk\" (UID: \"824f4509-b658-47ca-90b7-3725a3839996\") " pod="openstack/nova-cell0-cell-mapping-676jk" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.347248 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn72f\" (UniqueName: \"kubernetes.io/projected/824f4509-b658-47ca-90b7-3725a3839996-kube-api-access-cn72f\") pod \"nova-cell0-cell-mapping-676jk\" (UID: \"824f4509-b658-47ca-90b7-3725a3839996\") " pod="openstack/nova-cell0-cell-mapping-676jk" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.358750 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/824f4509-b658-47ca-90b7-3725a3839996-config-data\") pod \"nova-cell0-cell-mapping-676jk\" (UID: \"824f4509-b658-47ca-90b7-3725a3839996\") " pod="openstack/nova-cell0-cell-mapping-676jk" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.368584 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/824f4509-b658-47ca-90b7-3725a3839996-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-676jk\" (UID: \"824f4509-b658-47ca-90b7-3725a3839996\") " pod="openstack/nova-cell0-cell-mapping-676jk" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.380516 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/824f4509-b658-47ca-90b7-3725a3839996-scripts\") pod \"nova-cell0-cell-mapping-676jk\" (UID: \"824f4509-b658-47ca-90b7-3725a3839996\") " pod="openstack/nova-cell0-cell-mapping-676jk" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.424092 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn72f\" (UniqueName: 
\"kubernetes.io/projected/824f4509-b658-47ca-90b7-3725a3839996-kube-api-access-cn72f\") pod \"nova-cell0-cell-mapping-676jk\" (UID: \"824f4509-b658-47ca-90b7-3725a3839996\") " pod="openstack/nova-cell0-cell-mapping-676jk" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.437940 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.439742 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.451929 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.452676 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aaeda51-9c0c-4311-902d-82e8c6b38762-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3aaeda51-9c0c-4311-902d-82e8c6b38762\") " pod="openstack/nova-api-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.452752 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3aaeda51-9c0c-4311-902d-82e8c6b38762-logs\") pod \"nova-api-0\" (UID: \"3aaeda51-9c0c-4311-902d-82e8c6b38762\") " pod="openstack/nova-api-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.452811 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdg4x\" (UniqueName: \"kubernetes.io/projected/3aaeda51-9c0c-4311-902d-82e8c6b38762-kube-api-access-bdg4x\") pod \"nova-api-0\" (UID: \"3aaeda51-9c0c-4311-902d-82e8c6b38762\") " pod="openstack/nova-api-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.452884 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aaeda51-9c0c-4311-902d-82e8c6b38762-config-data\") pod \"nova-api-0\" (UID: \"3aaeda51-9c0c-4311-902d-82e8c6b38762\") " pod="openstack/nova-api-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.471182 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.507137 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.508454 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.544698 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.548205 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.555145 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdg4x\" (UniqueName: \"kubernetes.io/projected/3aaeda51-9c0c-4311-902d-82e8c6b38762-kube-api-access-bdg4x\") pod \"nova-api-0\" (UID: \"3aaeda51-9c0c-4311-902d-82e8c6b38762\") " pod="openstack/nova-api-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.555218 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aaeda51-9c0c-4311-902d-82e8c6b38762-config-data\") pod \"nova-api-0\" (UID: \"3aaeda51-9c0c-4311-902d-82e8c6b38762\") " pod="openstack/nova-api-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.555242 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78cbac58-13eb-43ae-b814-24b65ea2e9d1-logs\") pod \"nova-metadata-0\" (UID: \"78cbac58-13eb-43ae-b814-24b65ea2e9d1\") " pod="openstack/nova-metadata-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.555281 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78cbac58-13eb-43ae-b814-24b65ea2e9d1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"78cbac58-13eb-43ae-b814-24b65ea2e9d1\") " pod="openstack/nova-metadata-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.555306 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78cbac58-13eb-43ae-b814-24b65ea2e9d1-config-data\") pod \"nova-metadata-0\" (UID: \"78cbac58-13eb-43ae-b814-24b65ea2e9d1\") " pod="openstack/nova-metadata-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.555348 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aaeda51-9c0c-4311-902d-82e8c6b38762-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3aaeda51-9c0c-4311-902d-82e8c6b38762\") " pod="openstack/nova-api-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.555381 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3aaeda51-9c0c-4311-902d-82e8c6b38762-logs\") pod \"nova-api-0\" (UID: \"3aaeda51-9c0c-4311-902d-82e8c6b38762\") " pod="openstack/nova-api-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.555407 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngrvf\" (UniqueName: \"kubernetes.io/projected/78cbac58-13eb-43ae-b814-24b65ea2e9d1-kube-api-access-ngrvf\") pod \"nova-metadata-0\" (UID: \"78cbac58-13eb-43ae-b814-24b65ea2e9d1\") " pod="openstack/nova-metadata-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.561539 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3aaeda51-9c0c-4311-902d-82e8c6b38762-logs\") pod 
\"nova-api-0\" (UID: \"3aaeda51-9c0c-4311-902d-82e8c6b38762\") " pod="openstack/nova-api-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.567564 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aaeda51-9c0c-4311-902d-82e8c6b38762-config-data\") pod \"nova-api-0\" (UID: \"3aaeda51-9c0c-4311-902d-82e8c6b38762\") " pod="openstack/nova-api-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.584215 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.585688 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aaeda51-9c0c-4311-902d-82e8c6b38762-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3aaeda51-9c0c-4311-902d-82e8c6b38762\") " pod="openstack/nova-api-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.585718 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.596219 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.628161 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.632359 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-745f868dcf-lrft6"] Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.654054 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-745f868dcf-lrft6" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.656550 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78cbac58-13eb-43ae-b814-24b65ea2e9d1-logs\") pod \"nova-metadata-0\" (UID: \"78cbac58-13eb-43ae-b814-24b65ea2e9d1\") " pod="openstack/nova-metadata-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.656600 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78cbac58-13eb-43ae-b814-24b65ea2e9d1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"78cbac58-13eb-43ae-b814-24b65ea2e9d1\") " pod="openstack/nova-metadata-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.656624 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78cbac58-13eb-43ae-b814-24b65ea2e9d1-config-data\") pod \"nova-metadata-0\" (UID: \"78cbac58-13eb-43ae-b814-24b65ea2e9d1\") " pod="openstack/nova-metadata-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.656678 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78b7897c-466d-437f-946b-e3f2a220c119-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"78b7897c-466d-437f-946b-e3f2a220c119\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.656702 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmpv6\" (UniqueName: \"kubernetes.io/projected/78b7897c-466d-437f-946b-e3f2a220c119-kube-api-access-bmpv6\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"78b7897c-466d-437f-946b-e3f2a220c119\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.656723 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78b7897c-466d-437f-946b-e3f2a220c119-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"78b7897c-466d-437f-946b-e3f2a220c119\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.656743 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngrvf\" (UniqueName: \"kubernetes.io/projected/78cbac58-13eb-43ae-b814-24b65ea2e9d1-kube-api-access-ngrvf\") pod \"nova-metadata-0\" (UID: \"78cbac58-13eb-43ae-b814-24b65ea2e9d1\") " pod="openstack/nova-metadata-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.657295 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78cbac58-13eb-43ae-b814-24b65ea2e9d1-logs\") pod \"nova-metadata-0\" (UID: \"78cbac58-13eb-43ae-b814-24b65ea2e9d1\") " pod="openstack/nova-metadata-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.660830 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdg4x\" (UniqueName: \"kubernetes.io/projected/3aaeda51-9c0c-4311-902d-82e8c6b38762-kube-api-access-bdg4x\") pod \"nova-api-0\" (UID: \"3aaeda51-9c0c-4311-902d-82e8c6b38762\") " pod="openstack/nova-api-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.661227 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78cbac58-13eb-43ae-b814-24b65ea2e9d1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"78cbac58-13eb-43ae-b814-24b65ea2e9d1\") " pod="openstack/nova-metadata-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.674880 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78cbac58-13eb-43ae-b814-24b65ea2e9d1-config-data\") pod \"nova-metadata-0\" (UID: \"78cbac58-13eb-43ae-b814-24b65ea2e9d1\") " pod="openstack/nova-metadata-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.700241 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-745f868dcf-lrft6"] Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.704528 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngrvf\" (UniqueName: \"kubernetes.io/projected/78cbac58-13eb-43ae-b814-24b65ea2e9d1-kube-api-access-ngrvf\") pod \"nova-metadata-0\" (UID: \"78cbac58-13eb-43ae-b814-24b65ea2e9d1\") " pod="openstack/nova-metadata-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.723980 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-676jk" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.758707 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d492225-4a98-4973-b1e6-e24b17dfd0a3-ovsdbserver-sb\") pod \"dnsmasq-dns-745f868dcf-lrft6\" (UID: \"5d492225-4a98-4973-b1e6-e24b17dfd0a3\") " pod="openstack/dnsmasq-dns-745f868dcf-lrft6" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.758766 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfzx7\" (UniqueName: \"kubernetes.io/projected/5d492225-4a98-4973-b1e6-e24b17dfd0a3-kube-api-access-jfzx7\") pod \"dnsmasq-dns-745f868dcf-lrft6\" (UID: \"5d492225-4a98-4973-b1e6-e24b17dfd0a3\") " pod="openstack/dnsmasq-dns-745f868dcf-lrft6" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.758799 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmpv6\" (UniqueName: \"kubernetes.io/projected/78b7897c-466d-437f-946b-e3f2a220c119-kube-api-access-bmpv6\") pod \"nova-cell1-novncproxy-0\" (UID: \"78b7897c-466d-437f-946b-e3f2a220c119\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.758823 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d492225-4a98-4973-b1e6-e24b17dfd0a3-config\") pod \"dnsmasq-dns-745f868dcf-lrft6\" (UID: \"5d492225-4a98-4973-b1e6-e24b17dfd0a3\") " pod="openstack/dnsmasq-dns-745f868dcf-lrft6" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.758856 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78b7897c-466d-437f-946b-e3f2a220c119-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"78b7897c-466d-437f-946b-e3f2a220c119\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.758885 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d\") " pod="openstack/nova-scheduler-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.758953 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d-config-data\") pod \"nova-scheduler-0\" (UID: \"20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d\") " pod="openstack/nova-scheduler-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.758979 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9pj8\" (UniqueName: \"kubernetes.io/projected/20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d-kube-api-access-g9pj8\") pod \"nova-scheduler-0\" (UID: \"20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d\") " pod="openstack/nova-scheduler-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.759026 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d492225-4a98-4973-b1e6-e24b17dfd0a3-ovsdbserver-nb\") pod \"dnsmasq-dns-745f868dcf-lrft6\" (UID: 
\"5d492225-4a98-4973-b1e6-e24b17dfd0a3\") " pod="openstack/dnsmasq-dns-745f868dcf-lrft6" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.759104 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d492225-4a98-4973-b1e6-e24b17dfd0a3-dns-svc\") pod \"dnsmasq-dns-745f868dcf-lrft6\" (UID: \"5d492225-4a98-4973-b1e6-e24b17dfd0a3\") " pod="openstack/dnsmasq-dns-745f868dcf-lrft6" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.759166 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78b7897c-466d-437f-946b-e3f2a220c119-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"78b7897c-466d-437f-946b-e3f2a220c119\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.765259 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78b7897c-466d-437f-946b-e3f2a220c119-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"78b7897c-466d-437f-946b-e3f2a220c119\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.768164 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78b7897c-466d-437f-946b-e3f2a220c119-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"78b7897c-466d-437f-946b-e3f2a220c119\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.776791 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmpv6\" (UniqueName: \"kubernetes.io/projected/78b7897c-466d-437f-946b-e3f2a220c119-kube-api-access-bmpv6\") pod \"nova-cell1-novncproxy-0\" (UID: \"78b7897c-466d-437f-946b-e3f2a220c119\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.818191 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.863485 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d492225-4a98-4973-b1e6-e24b17dfd0a3-ovsdbserver-nb\") pod \"dnsmasq-dns-745f868dcf-lrft6\" (UID: \"5d492225-4a98-4973-b1e6-e24b17dfd0a3\") " pod="openstack/dnsmasq-dns-745f868dcf-lrft6" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.863784 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d492225-4a98-4973-b1e6-e24b17dfd0a3-dns-svc\") pod \"dnsmasq-dns-745f868dcf-lrft6\" (UID: \"5d492225-4a98-4973-b1e6-e24b17dfd0a3\") " pod="openstack/dnsmasq-dns-745f868dcf-lrft6" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.863895 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d492225-4a98-4973-b1e6-e24b17dfd0a3-ovsdbserver-sb\") pod \"dnsmasq-dns-745f868dcf-lrft6\" (UID: \"5d492225-4a98-4973-b1e6-e24b17dfd0a3\") " pod="openstack/dnsmasq-dns-745f868dcf-lrft6" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.863918 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfzx7\" (UniqueName: \"kubernetes.io/projected/5d492225-4a98-4973-b1e6-e24b17dfd0a3-kube-api-access-jfzx7\") pod \"dnsmasq-dns-745f868dcf-lrft6\" (UID: \"5d492225-4a98-4973-b1e6-e24b17dfd0a3\") " pod="openstack/dnsmasq-dns-745f868dcf-lrft6" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.863937 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d492225-4a98-4973-b1e6-e24b17dfd0a3-config\") pod \"dnsmasq-dns-745f868dcf-lrft6\" (UID: \"5d492225-4a98-4973-b1e6-e24b17dfd0a3\") " pod="openstack/dnsmasq-dns-745f868dcf-lrft6" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.863982 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d\") " pod="openstack/nova-scheduler-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.864037 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d-config-data\") pod \"nova-scheduler-0\" (UID: \"20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d\") " pod="openstack/nova-scheduler-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.864056 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9pj8\" (UniqueName: \"kubernetes.io/projected/20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d-kube-api-access-g9pj8\") pod \"nova-scheduler-0\" (UID: \"20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d\") " pod="openstack/nova-scheduler-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.867843 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d492225-4a98-4973-b1e6-e24b17dfd0a3-ovsdbserver-sb\") pod \"dnsmasq-dns-745f868dcf-lrft6\" (UID: \"5d492225-4a98-4973-b1e6-e24b17dfd0a3\") " pod="openstack/dnsmasq-dns-745f868dcf-lrft6" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.868992 4821 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d492225-4a98-4973-b1e6-e24b17dfd0a3-dns-svc\") pod \"dnsmasq-dns-745f868dcf-lrft6\" (UID: \"5d492225-4a98-4973-b1e6-e24b17dfd0a3\") " pod="openstack/dnsmasq-dns-745f868dcf-lrft6" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.871061 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d492225-4a98-4973-b1e6-e24b17dfd0a3-ovsdbserver-nb\") pod \"dnsmasq-dns-745f868dcf-lrft6\" (UID: \"5d492225-4a98-4973-b1e6-e24b17dfd0a3\") " pod="openstack/dnsmasq-dns-745f868dcf-lrft6" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.876153 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d492225-4a98-4973-b1e6-e24b17dfd0a3-config\") pod \"dnsmasq-dns-745f868dcf-lrft6\" (UID: \"5d492225-4a98-4973-b1e6-e24b17dfd0a3\") " pod="openstack/dnsmasq-dns-745f868dcf-lrft6" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.884013 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d\") " pod="openstack/nova-scheduler-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.884414 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d-config-data\") pod \"nova-scheduler-0\" (UID: \"20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d\") " pod="openstack/nova-scheduler-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.888102 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9pj8\" (UniqueName: \"kubernetes.io/projected/20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d-kube-api-access-g9pj8\") pod \"nova-scheduler-0\" (UID: \"20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d\") " pod="openstack/nova-scheduler-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.889828 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfzx7\" (UniqueName: \"kubernetes.io/projected/5d492225-4a98-4973-b1e6-e24b17dfd0a3-kube-api-access-jfzx7\") pod \"dnsmasq-dns-745f868dcf-lrft6\" (UID: \"5d492225-4a98-4973-b1e6-e24b17dfd0a3\") " pod="openstack/dnsmasq-dns-745f868dcf-lrft6" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.923458 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Sep 30 17:21:50 crc kubenswrapper[4821]: I0930 17:21:50.948357 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:21:51 crc kubenswrapper[4821]: I0930 17:21:51.012898 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Sep 30 17:21:51 crc kubenswrapper[4821]: I0930 17:21:51.050458 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-745f868dcf-lrft6" Sep 30 17:21:51 crc kubenswrapper[4821]: I0930 17:21:51.184879 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Sep 30 17:21:51 crc kubenswrapper[4821]: I0930 17:21:51.270175 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-676jk"] Sep 30 17:21:51 crc kubenswrapper[4821]: I0930 17:21:51.291963 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-676jk" event={"ID":"824f4509-b658-47ca-90b7-3725a3839996","Type":"ContainerStarted","Data":"3a5c0b41fdbd7dbb5a588dbb84bd424df460cf94d7cea2c2d12452a34d08aeb1"} Sep 30 17:21:51 crc kubenswrapper[4821]: I0930 17:21:51.310999 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"78cbac58-13eb-43ae-b814-24b65ea2e9d1","Type":"ContainerStarted","Data":"372b3452d655603828ba2bba4d451403100eb0a4671b20f5e22be3f2bba03a9f"} Sep 30 17:21:51 crc kubenswrapper[4821]: I0930 17:21:51.544966 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Sep 30 17:21:51 crc kubenswrapper[4821]: W0930 17:21:51.550652 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3aaeda51_9c0c_4311_902d_82e8c6b38762.slice/crio-700ea73201d92f8d3e2b9a04f846d05161e42fcf9eb112f5b021002b53fc8b5a WatchSource:0}: Error finding container 700ea73201d92f8d3e2b9a04f846d05161e42fcf9eb112f5b021002b53fc8b5a: Status 404 returned error can't find the container with id 700ea73201d92f8d3e2b9a04f846d05161e42fcf9eb112f5b021002b53fc8b5a Sep 30 17:21:51 crc kubenswrapper[4821]: I0930 17:21:51.573715 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zx2qv"] Sep 30 17:21:51 crc kubenswrapper[4821]: I0930 17:21:51.578235 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zx2qv" Sep 30 17:21:51 crc kubenswrapper[4821]: I0930 17:21:51.588336 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Sep 30 17:21:51 crc kubenswrapper[4821]: I0930 17:21:51.588635 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Sep 30 17:21:51 crc kubenswrapper[4821]: I0930 17:21:51.588919 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 30 17:21:51 crc kubenswrapper[4821]: I0930 17:21:51.608511 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zx2qv"] Sep 30 17:21:51 crc kubenswrapper[4821]: I0930 17:21:51.683945 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc0a8889-b876-4532-9c20-c5a0ecae9dd4-config-data\") pod \"nova-cell1-conductor-db-sync-zx2qv\" (UID: \"fc0a8889-b876-4532-9c20-c5a0ecae9dd4\") " pod="openstack/nova-cell1-conductor-db-sync-zx2qv" Sep 30 17:21:51 crc kubenswrapper[4821]: I0930 17:21:51.684004 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc0a8889-b876-4532-9c20-c5a0ecae9dd4-scripts\") pod \"nova-cell1-conductor-db-sync-zx2qv\" (UID: \"fc0a8889-b876-4532-9c20-c5a0ecae9dd4\") " pod="openstack/nova-cell1-conductor-db-sync-zx2qv" Sep 30 17:21:51 crc kubenswrapper[4821]: I0930 17:21:51.684070 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqcx7\" (UniqueName: \"kubernetes.io/projected/fc0a8889-b876-4532-9c20-c5a0ecae9dd4-kube-api-access-bqcx7\") pod \"nova-cell1-conductor-db-sync-zx2qv\" (UID: \"fc0a8889-b876-4532-9c20-c5a0ecae9dd4\") " pod="openstack/nova-cell1-conductor-db-sync-zx2qv" Sep 30 17:21:51 crc kubenswrapper[4821]: I0930 17:21:51.684233 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc0a8889-b876-4532-9c20-c5a0ecae9dd4-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zx2qv\" (UID: \"fc0a8889-b876-4532-9c20-c5a0ecae9dd4\") " pod="openstack/nova-cell1-conductor-db-sync-zx2qv" Sep 30 17:21:51 crc kubenswrapper[4821]: I0930 17:21:51.785406 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc0a8889-b876-4532-9c20-c5a0ecae9dd4-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zx2qv\" (UID: \"fc0a8889-b876-4532-9c20-c5a0ecae9dd4\") " pod="openstack/nova-cell1-conductor-db-sync-zx2qv" Sep 30 17:21:51 crc kubenswrapper[4821]: I0930 17:21:51.785574 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc0a8889-b876-4532-9c20-c5a0ecae9dd4-config-data\") pod \"nova-cell1-conductor-db-sync-zx2qv\" (UID: \"fc0a8889-b876-4532-9c20-c5a0ecae9dd4\") " pod="openstack/nova-cell1-conductor-db-sync-zx2qv" Sep 30 17:21:51 crc kubenswrapper[4821]: I0930 17:21:51.786969 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc0a8889-b876-4532-9c20-c5a0ecae9dd4-scripts\") pod \"nova-cell1-conductor-db-sync-zx2qv\" (UID: 
\"fc0a8889-b876-4532-9c20-c5a0ecae9dd4\") " pod="openstack/nova-cell1-conductor-db-sync-zx2qv" Sep 30 17:21:51 crc kubenswrapper[4821]: I0930 17:21:51.787569 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqcx7\" (UniqueName: \"kubernetes.io/projected/fc0a8889-b876-4532-9c20-c5a0ecae9dd4-kube-api-access-bqcx7\") pod \"nova-cell1-conductor-db-sync-zx2qv\" (UID: \"fc0a8889-b876-4532-9c20-c5a0ecae9dd4\") " pod="openstack/nova-cell1-conductor-db-sync-zx2qv" Sep 30 17:21:51 crc kubenswrapper[4821]: I0930 17:21:51.790485 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc0a8889-b876-4532-9c20-c5a0ecae9dd4-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zx2qv\" (UID: \"fc0a8889-b876-4532-9c20-c5a0ecae9dd4\") " pod="openstack/nova-cell1-conductor-db-sync-zx2qv" Sep 30 17:21:51 crc kubenswrapper[4821]: I0930 17:21:51.791098 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc0a8889-b876-4532-9c20-c5a0ecae9dd4-config-data\") pod \"nova-cell1-conductor-db-sync-zx2qv\" (UID: \"fc0a8889-b876-4532-9c20-c5a0ecae9dd4\") " pod="openstack/nova-cell1-conductor-db-sync-zx2qv" Sep 30 17:21:51 crc kubenswrapper[4821]: I0930 17:21:51.791264 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc0a8889-b876-4532-9c20-c5a0ecae9dd4-scripts\") pod \"nova-cell1-conductor-db-sync-zx2qv\" (UID: \"fc0a8889-b876-4532-9c20-c5a0ecae9dd4\") " pod="openstack/nova-cell1-conductor-db-sync-zx2qv" Sep 30 17:21:51 crc kubenswrapper[4821]: I0930 17:21:51.807342 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqcx7\" (UniqueName: \"kubernetes.io/projected/fc0a8889-b876-4532-9c20-c5a0ecae9dd4-kube-api-access-bqcx7\") pod \"nova-cell1-conductor-db-sync-zx2qv\" (UID: \"fc0a8889-b876-4532-9c20-c5a0ecae9dd4\") " pod="openstack/nova-cell1-conductor-db-sync-zx2qv" Sep 30 17:21:51 crc kubenswrapper[4821]: I0930 17:21:51.875634 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Sep 30 17:21:51 crc kubenswrapper[4821]: I0930 17:21:51.883871 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-745f868dcf-lrft6"] Sep 30 17:21:51 crc kubenswrapper[4821]: W0930 17:21:51.888358 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20b5bc75_9e66_4f70_b97f_eaa1c74a8c9d.slice/crio-940c1705f211d2b38ae0ffa76040c61a2cc27f682532f7d0bb2cd32a957edafb WatchSource:0}: Error finding container 940c1705f211d2b38ae0ffa76040c61a2cc27f682532f7d0bb2cd32a957edafb: Status 404 returned error can't find the container with id 940c1705f211d2b38ae0ffa76040c61a2cc27f682532f7d0bb2cd32a957edafb Sep 30 17:21:51 crc kubenswrapper[4821]: I0930 17:21:51.980097 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zx2qv" Sep 30 17:21:52 crc kubenswrapper[4821]: I0930 17:21:52.337191 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-676jk" event={"ID":"824f4509-b658-47ca-90b7-3725a3839996","Type":"ContainerStarted","Data":"242ec62677dbf6170eec9e02269478a326440a22aa2a359db5e6bb5006472051"} Sep 30 17:21:52 crc kubenswrapper[4821]: I0930 17:21:52.338761 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3aaeda51-9c0c-4311-902d-82e8c6b38762","Type":"ContainerStarted","Data":"700ea73201d92f8d3e2b9a04f846d05161e42fcf9eb112f5b021002b53fc8b5a"} Sep 30 17:21:52 crc kubenswrapper[4821]: I0930 17:21:52.339983 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d","Type":"ContainerStarted","Data":"940c1705f211d2b38ae0ffa76040c61a2cc27f682532f7d0bb2cd32a957edafb"} Sep 30 17:21:52 crc kubenswrapper[4821]: I0930 17:21:52.354118 4821 generic.go:334] "Generic (PLEG): container finished" podID="5d492225-4a98-4973-b1e6-e24b17dfd0a3" containerID="464342108fbb5c9dff37abd8403c306ba9cbb80d6f3fb6321b33f205e6bce4e9" exitCode=0 Sep 30 17:21:52 crc kubenswrapper[4821]: I0930 17:21:52.354242 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745f868dcf-lrft6" event={"ID":"5d492225-4a98-4973-b1e6-e24b17dfd0a3","Type":"ContainerDied","Data":"464342108fbb5c9dff37abd8403c306ba9cbb80d6f3fb6321b33f205e6bce4e9"} Sep 30 17:21:52 crc kubenswrapper[4821]: I0930 17:21:52.354269 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745f868dcf-lrft6" event={"ID":"5d492225-4a98-4973-b1e6-e24b17dfd0a3","Type":"ContainerStarted","Data":"dc126d012dd048ee23db7eae04725db10dda32c6163a7ffd7e97280ecb52d939"} Sep 30 17:21:52 crc kubenswrapper[4821]: I0930 17:21:52.356488 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"78b7897c-466d-437f-946b-e3f2a220c119","Type":"ContainerStarted","Data":"1d9eba8b4c6051448494cf82012129cb3cddef1059327a9f7bb86308c9c3c0d0"} Sep 30 17:21:52 crc kubenswrapper[4821]: I0930 17:21:52.361605 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-676jk" podStartSLOduration=2.3615897439999998 podStartE2EDuration="2.361589744s" podCreationTimestamp="2025-09-30 17:21:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:21:52.359739299 +0000 UTC m=+1108.264785243" watchObservedRunningTime="2025-09-30 17:21:52.361589744 +0000 UTC m=+1108.266635688" Sep 30 17:21:52 crc kubenswrapper[4821]: I0930 17:21:52.493711 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zx2qv"] Sep 30 17:21:53 crc kubenswrapper[4821]: I0930 17:21:53.375007 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745f868dcf-lrft6" event={"ID":"5d492225-4a98-4973-b1e6-e24b17dfd0a3","Type":"ContainerStarted","Data":"47783fae952d2de6f087ce315dd302d2af530d8373c74b121262e6187acd6444"} Sep 30 17:21:53 crc kubenswrapper[4821]: I0930 17:21:53.377947 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zx2qv" 
event={"ID":"fc0a8889-b876-4532-9c20-c5a0ecae9dd4","Type":"ContainerStarted","Data":"236471828c46e013602f99e03dbcaa478a1b95fd5fc02c719efb4b3e22d0e06f"} Sep 30 17:21:53 crc kubenswrapper[4821]: I0930 17:21:53.377984 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zx2qv" event={"ID":"fc0a8889-b876-4532-9c20-c5a0ecae9dd4","Type":"ContainerStarted","Data":"2ae63b0143aa0e75b0126f58aa457178aa976e7ec96a43e54ae649e4eb198355"} Sep 30 17:21:53 crc kubenswrapper[4821]: I0930 17:21:53.400970 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-745f868dcf-lrft6" podStartSLOduration=3.400946859 podStartE2EDuration="3.400946859s" podCreationTimestamp="2025-09-30 17:21:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:21:53.391227508 +0000 UTC m=+1109.296273442" watchObservedRunningTime="2025-09-30 17:21:53.400946859 +0000 UTC m=+1109.305992833" Sep 30 17:21:53 crc kubenswrapper[4821]: I0930 17:21:53.419574 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-zx2qv" podStartSLOduration=2.419555242 podStartE2EDuration="2.419555242s" podCreationTimestamp="2025-09-30 17:21:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:21:53.404765754 +0000 UTC m=+1109.309811698" watchObservedRunningTime="2025-09-30 17:21:53.419555242 +0000 UTC m=+1109.324601186" Sep 30 17:21:54 crc kubenswrapper[4821]: I0930 17:21:54.354559 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Sep 30 17:21:54 crc kubenswrapper[4821]: I0930 17:21:54.371163 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 30 17:21:54 crc kubenswrapper[4821]: I0930 17:21:54.395777 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-745f868dcf-lrft6" Sep 30 17:21:56 crc kubenswrapper[4821]: I0930 17:21:56.412402 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"78b7897c-466d-437f-946b-e3f2a220c119","Type":"ContainerStarted","Data":"fd6979c1b177dd7d1e22ab200d53f9c9fb743efab1bb49ad7ad3e4b7ba40cdea"} Sep 30 17:21:56 crc kubenswrapper[4821]: I0930 17:21:56.412512 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="78b7897c-466d-437f-946b-e3f2a220c119" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://fd6979c1b177dd7d1e22ab200d53f9c9fb743efab1bb49ad7ad3e4b7ba40cdea" gracePeriod=30 Sep 30 17:21:56 crc kubenswrapper[4821]: I0930 17:21:56.415797 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3aaeda51-9c0c-4311-902d-82e8c6b38762","Type":"ContainerStarted","Data":"6616d11b19cfc66c7b344dded05b4099e527de2da2f9c7de7cc44ef42f409299"} Sep 30 17:21:56 crc kubenswrapper[4821]: I0930 17:21:56.415848 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3aaeda51-9c0c-4311-902d-82e8c6b38762","Type":"ContainerStarted","Data":"c6c2ff5f464b58d82eee5f8c6fdaf5fedd20fe6b1ae4c9d7dd1f43306f7cbdac"} Sep 30 17:21:56 crc kubenswrapper[4821]: I0930 17:21:56.419196 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"78cbac58-13eb-43ae-b814-24b65ea2e9d1","Type":"ContainerStarted","Data":"2d8710e1d95c53c166e1db38ac962a46bb01e0bb698e53b16c68efa2a13f31fe"} Sep 30 17:21:56 crc kubenswrapper[4821]: I0930 17:21:56.419245 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"78cbac58-13eb-43ae-b814-24b65ea2e9d1","Type":"ContainerStarted","Data":"21c7b8250327f18ecfcc8bd6c47dbdbbd2113f969268ce4c1a9fe79f9be228d0"} Sep 30 17:21:56 crc kubenswrapper[4821]: I0930 17:21:56.419363 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="78cbac58-13eb-43ae-b814-24b65ea2e9d1" containerName="nova-metadata-log" containerID="cri-o://21c7b8250327f18ecfcc8bd6c47dbdbbd2113f969268ce4c1a9fe79f9be228d0" gracePeriod=30 Sep 30 17:21:56 crc kubenswrapper[4821]: I0930 17:21:56.419613 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="78cbac58-13eb-43ae-b814-24b65ea2e9d1" containerName="nova-metadata-metadata" containerID="cri-o://2d8710e1d95c53c166e1db38ac962a46bb01e0bb698e53b16c68efa2a13f31fe" gracePeriod=30 Sep 30 17:21:56 crc kubenswrapper[4821]: I0930 17:21:56.425296 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d","Type":"ContainerStarted","Data":"363c090a396c8c6ceb06e64e66a2d425d73c10e51d7ffa002264c1a0e597a9e9"} Sep 30 17:21:56 crc kubenswrapper[4821]: I0930 17:21:56.434738 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.466876693 podStartE2EDuration="6.434720336s" podCreationTimestamp="2025-09-30 17:21:50 +0000 UTC" firstStartedPulling="2025-09-30 17:21:51.597802993 +0000 UTC m=+1107.502848937" lastFinishedPulling="2025-09-30 17:21:55.565646636 +0000 UTC m=+1111.470692580" observedRunningTime="2025-09-30 17:21:56.43243185 +0000 UTC m=+1112.337477794" watchObservedRunningTime="2025-09-30 17:21:56.434720336 +0000 UTC m=+1112.339766280" Sep 30 17:21:56 crc kubenswrapper[4821]: I0930 17:21:56.463623 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.825791256 podStartE2EDuration="6.463604024s" podCreationTimestamp="2025-09-30 17:21:50 +0000 UTC" firstStartedPulling="2025-09-30 17:21:51.889909645 +0000 UTC m=+1107.794955589" lastFinishedPulling="2025-09-30 17:21:55.527722413 +0000 UTC m=+1111.432768357" observedRunningTime="2025-09-30 17:21:56.460646101 +0000 UTC m=+1112.365692045" watchObservedRunningTime="2025-09-30 17:21:56.463604024 +0000 UTC m=+1112.368649968" Sep 30 17:21:56 crc kubenswrapper[4821]: I0930 17:21:56.497553 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.465220872 podStartE2EDuration="6.497536408s" podCreationTimestamp="2025-09-30 17:21:50 +0000 UTC" firstStartedPulling="2025-09-30 17:21:51.552215669 +0000 UTC m=+1107.457261613" lastFinishedPulling="2025-09-30 17:21:55.584531205 +0000 UTC m=+1111.489577149" observedRunningTime="2025-09-30 17:21:56.487082138 +0000 UTC m=+1112.392128082" watchObservedRunningTime="2025-09-30 17:21:56.497536408 +0000 UTC m=+1112.402582342" Sep 30 17:21:56 crc kubenswrapper[4821]: I0930 17:21:56.510122 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.162470294 podStartE2EDuration="6.51008404s" 
podCreationTimestamp="2025-09-30 17:21:50 +0000 UTC" firstStartedPulling="2025-09-30 17:21:51.217828714 +0000 UTC m=+1107.122874648" lastFinishedPulling="2025-09-30 17:21:55.56544245 +0000 UTC m=+1111.470488394" observedRunningTime="2025-09-30 17:21:56.503701372 +0000 UTC m=+1112.408747316" watchObservedRunningTime="2025-09-30 17:21:56.51008404 +0000 UTC m=+1112.415129984" Sep 30 17:21:57 crc kubenswrapper[4821]: I0930 17:21:57.434551 4821 generic.go:334] "Generic (PLEG): container finished" podID="78cbac58-13eb-43ae-b814-24b65ea2e9d1" containerID="21c7b8250327f18ecfcc8bd6c47dbdbbd2113f969268ce4c1a9fe79f9be228d0" exitCode=143 Sep 30 17:21:57 crc kubenswrapper[4821]: I0930 17:21:57.434604 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"78cbac58-13eb-43ae-b814-24b65ea2e9d1","Type":"ContainerDied","Data":"21c7b8250327f18ecfcc8bd6c47dbdbbd2113f969268ce4c1a9fe79f9be228d0"} Sep 30 17:22:00 crc kubenswrapper[4821]: I0930 17:22:00.460049 4821 generic.go:334] "Generic (PLEG): container finished" podID="fc0a8889-b876-4532-9c20-c5a0ecae9dd4" containerID="236471828c46e013602f99e03dbcaa478a1b95fd5fc02c719efb4b3e22d0e06f" exitCode=0 Sep 30 17:22:00 crc kubenswrapper[4821]: I0930 17:22:00.460129 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zx2qv" event={"ID":"fc0a8889-b876-4532-9c20-c5a0ecae9dd4","Type":"ContainerDied","Data":"236471828c46e013602f99e03dbcaa478a1b95fd5fc02c719efb4b3e22d0e06f"} Sep 30 17:22:00 crc kubenswrapper[4821]: I0930 17:22:00.462644 4821 generic.go:334] "Generic (PLEG): container finished" podID="824f4509-b658-47ca-90b7-3725a3839996" containerID="242ec62677dbf6170eec9e02269478a326440a22aa2a359db5e6bb5006472051" exitCode=0 Sep 30 17:22:00 crc kubenswrapper[4821]: I0930 17:22:00.462704 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-676jk" event={"ID":"824f4509-b658-47ca-90b7-3725a3839996","Type":"ContainerDied","Data":"242ec62677dbf6170eec9e02269478a326440a22aa2a359db5e6bb5006472051"} Sep 30 17:22:00 crc kubenswrapper[4821]: I0930 17:22:00.819070 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Sep 30 17:22:00 crc kubenswrapper[4821]: I0930 17:22:00.819146 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Sep 30 17:22:00 crc kubenswrapper[4821]: I0930 17:22:00.924741 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Sep 30 17:22:00 crc kubenswrapper[4821]: I0930 17:22:00.924783 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Sep 30 17:22:00 crc kubenswrapper[4821]: I0930 17:22:00.949849 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:22:01 crc kubenswrapper[4821]: I0930 17:22:01.014270 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Sep 30 17:22:01 crc kubenswrapper[4821]: I0930 17:22:01.014321 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Sep 30 17:22:01 crc kubenswrapper[4821]: I0930 17:22:01.039039 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Sep 30 17:22:01 crc kubenswrapper[4821]: I0930 17:22:01.055270 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-745f868dcf-lrft6" Sep 30 17:22:01 crc kubenswrapper[4821]: I0930 17:22:01.114952 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f64d5748f-ckqhg"] Sep 30 17:22:01 crc kubenswrapper[4821]: I0930 17:22:01.115567 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f64d5748f-ckqhg" podUID="a45b832b-22cd-47fa-bd7f-83ad23d2d135" containerName="dnsmasq-dns" containerID="cri-o://f4496f07efe2e8406a06d98600d8f1f5ef294c6a272c36ee713b4899caae9b35" gracePeriod=10 Sep 30 17:22:01 crc kubenswrapper[4821]: I0930 17:22:01.495181 4821 generic.go:334] "Generic (PLEG): container finished" podID="a45b832b-22cd-47fa-bd7f-83ad23d2d135" containerID="f4496f07efe2e8406a06d98600d8f1f5ef294c6a272c36ee713b4899caae9b35" exitCode=0 Sep 30 17:22:01 crc kubenswrapper[4821]: I0930 17:22:01.495393 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f64d5748f-ckqhg" event={"ID":"a45b832b-22cd-47fa-bd7f-83ad23d2d135","Type":"ContainerDied","Data":"f4496f07efe2e8406a06d98600d8f1f5ef294c6a272c36ee713b4899caae9b35"} Sep 30 17:22:01 crc kubenswrapper[4821]: I0930 17:22:01.564293 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Sep 30 17:22:01 crc kubenswrapper[4821]: I0930 17:22:01.746805 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f64d5748f-ckqhg" Sep 30 17:22:01 crc kubenswrapper[4821]: I0930 17:22:01.910131 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a45b832b-22cd-47fa-bd7f-83ad23d2d135-ovsdbserver-nb\") pod \"a45b832b-22cd-47fa-bd7f-83ad23d2d135\" (UID: \"a45b832b-22cd-47fa-bd7f-83ad23d2d135\") " Sep 30 17:22:01 crc kubenswrapper[4821]: I0930 17:22:01.910239 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a45b832b-22cd-47fa-bd7f-83ad23d2d135-dns-svc\") pod \"a45b832b-22cd-47fa-bd7f-83ad23d2d135\" (UID: \"a45b832b-22cd-47fa-bd7f-83ad23d2d135\") " Sep 30 17:22:01 crc kubenswrapper[4821]: I0930 17:22:01.916221 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a45b832b-22cd-47fa-bd7f-83ad23d2d135-config\") pod \"a45b832b-22cd-47fa-bd7f-83ad23d2d135\" (UID: \"a45b832b-22cd-47fa-bd7f-83ad23d2d135\") " Sep 30 17:22:01 crc kubenswrapper[4821]: I0930 17:22:01.916255 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fv66\" (UniqueName: \"kubernetes.io/projected/a45b832b-22cd-47fa-bd7f-83ad23d2d135-kube-api-access-6fv66\") pod \"a45b832b-22cd-47fa-bd7f-83ad23d2d135\" (UID: \"a45b832b-22cd-47fa-bd7f-83ad23d2d135\") " Sep 30 17:22:01 crc kubenswrapper[4821]: I0930 17:22:01.916337 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a45b832b-22cd-47fa-bd7f-83ad23d2d135-ovsdbserver-sb\") pod \"a45b832b-22cd-47fa-bd7f-83ad23d2d135\" (UID: \"a45b832b-22cd-47fa-bd7f-83ad23d2d135\") " Sep 30 17:22:01 crc kubenswrapper[4821]: I0930 17:22:01.928314 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a45b832b-22cd-47fa-bd7f-83ad23d2d135-kube-api-access-6fv66" (OuterVolumeSpecName: "kube-api-access-6fv66") pod 
"a45b832b-22cd-47fa-bd7f-83ad23d2d135" (UID: "a45b832b-22cd-47fa-bd7f-83ad23d2d135"). InnerVolumeSpecName "kube-api-access-6fv66". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.008679 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3aaeda51-9c0c-4311-902d-82e8c6b38762" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.169:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.008691 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3aaeda51-9c0c-4311-902d-82e8c6b38762" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.169:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.019447 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fv66\" (UniqueName: \"kubernetes.io/projected/a45b832b-22cd-47fa-bd7f-83ad23d2d135-kube-api-access-6fv66\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.031912 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a45b832b-22cd-47fa-bd7f-83ad23d2d135-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a45b832b-22cd-47fa-bd7f-83ad23d2d135" (UID: "a45b832b-22cd-47fa-bd7f-83ad23d2d135"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.037638 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a45b832b-22cd-47fa-bd7f-83ad23d2d135-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a45b832b-22cd-47fa-bd7f-83ad23d2d135" (UID: "a45b832b-22cd-47fa-bd7f-83ad23d2d135"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.037944 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a45b832b-22cd-47fa-bd7f-83ad23d2d135-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a45b832b-22cd-47fa-bd7f-83ad23d2d135" (UID: "a45b832b-22cd-47fa-bd7f-83ad23d2d135"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.051319 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a45b832b-22cd-47fa-bd7f-83ad23d2d135-config" (OuterVolumeSpecName: "config") pod "a45b832b-22cd-47fa-bd7f-83ad23d2d135" (UID: "a45b832b-22cd-47fa-bd7f-83ad23d2d135"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.063795 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zx2qv" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.078373 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-676jk" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.121136 4821 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a45b832b-22cd-47fa-bd7f-83ad23d2d135-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.121166 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a45b832b-22cd-47fa-bd7f-83ad23d2d135-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.121178 4821 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a45b832b-22cd-47fa-bd7f-83ad23d2d135-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.121186 4821 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a45b832b-22cd-47fa-bd7f-83ad23d2d135-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.222504 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/824f4509-b658-47ca-90b7-3725a3839996-scripts\") pod \"824f4509-b658-47ca-90b7-3725a3839996\" (UID: \"824f4509-b658-47ca-90b7-3725a3839996\") " Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.222731 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqcx7\" (UniqueName: \"kubernetes.io/projected/fc0a8889-b876-4532-9c20-c5a0ecae9dd4-kube-api-access-bqcx7\") pod \"fc0a8889-b876-4532-9c20-c5a0ecae9dd4\" (UID: \"fc0a8889-b876-4532-9c20-c5a0ecae9dd4\") " Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.222836 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc0a8889-b876-4532-9c20-c5a0ecae9dd4-config-data\") pod \"fc0a8889-b876-4532-9c20-c5a0ecae9dd4\" (UID: \"fc0a8889-b876-4532-9c20-c5a0ecae9dd4\") " Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.222953 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc0a8889-b876-4532-9c20-c5a0ecae9dd4-scripts\") pod \"fc0a8889-b876-4532-9c20-c5a0ecae9dd4\" (UID: \"fc0a8889-b876-4532-9c20-c5a0ecae9dd4\") " Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.223027 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cn72f\" (UniqueName: \"kubernetes.io/projected/824f4509-b658-47ca-90b7-3725a3839996-kube-api-access-cn72f\") pod \"824f4509-b658-47ca-90b7-3725a3839996\" (UID: \"824f4509-b658-47ca-90b7-3725a3839996\") " Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.223203 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/824f4509-b658-47ca-90b7-3725a3839996-config-data\") pod \"824f4509-b658-47ca-90b7-3725a3839996\" (UID: \"824f4509-b658-47ca-90b7-3725a3839996\") " Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.223297 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/824f4509-b658-47ca-90b7-3725a3839996-combined-ca-bundle\") pod \"824f4509-b658-47ca-90b7-3725a3839996\" (UID: 
\"824f4509-b658-47ca-90b7-3725a3839996\") " Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.223376 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc0a8889-b876-4532-9c20-c5a0ecae9dd4-combined-ca-bundle\") pod \"fc0a8889-b876-4532-9c20-c5a0ecae9dd4\" (UID: \"fc0a8889-b876-4532-9c20-c5a0ecae9dd4\") " Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.231442 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc0a8889-b876-4532-9c20-c5a0ecae9dd4-kube-api-access-bqcx7" (OuterVolumeSpecName: "kube-api-access-bqcx7") pod "fc0a8889-b876-4532-9c20-c5a0ecae9dd4" (UID: "fc0a8889-b876-4532-9c20-c5a0ecae9dd4"). InnerVolumeSpecName "kube-api-access-bqcx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.241177 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/824f4509-b658-47ca-90b7-3725a3839996-scripts" (OuterVolumeSpecName: "scripts") pod "824f4509-b658-47ca-90b7-3725a3839996" (UID: "824f4509-b658-47ca-90b7-3725a3839996"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.264520 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc0a8889-b876-4532-9c20-c5a0ecae9dd4-scripts" (OuterVolumeSpecName: "scripts") pod "fc0a8889-b876-4532-9c20-c5a0ecae9dd4" (UID: "fc0a8889-b876-4532-9c20-c5a0ecae9dd4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.264684 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/824f4509-b658-47ca-90b7-3725a3839996-kube-api-access-cn72f" (OuterVolumeSpecName: "kube-api-access-cn72f") pod "824f4509-b658-47ca-90b7-3725a3839996" (UID: "824f4509-b658-47ca-90b7-3725a3839996"). InnerVolumeSpecName "kube-api-access-cn72f". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.270225 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/824f4509-b658-47ca-90b7-3725a3839996-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "824f4509-b658-47ca-90b7-3725a3839996" (UID: "824f4509-b658-47ca-90b7-3725a3839996"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.270305 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc0a8889-b876-4532-9c20-c5a0ecae9dd4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc0a8889-b876-4532-9c20-c5a0ecae9dd4" (UID: "fc0a8889-b876-4532-9c20-c5a0ecae9dd4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.272230 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc0a8889-b876-4532-9c20-c5a0ecae9dd4-config-data" (OuterVolumeSpecName: "config-data") pod "fc0a8889-b876-4532-9c20-c5a0ecae9dd4" (UID: "fc0a8889-b876-4532-9c20-c5a0ecae9dd4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.273159 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/824f4509-b658-47ca-90b7-3725a3839996-config-data" (OuterVolumeSpecName: "config-data") pod "824f4509-b658-47ca-90b7-3725a3839996" (UID: "824f4509-b658-47ca-90b7-3725a3839996"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.325944 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/824f4509-b658-47ca-90b7-3725a3839996-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.325986 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqcx7\" (UniqueName: \"kubernetes.io/projected/fc0a8889-b876-4532-9c20-c5a0ecae9dd4-kube-api-access-bqcx7\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.326002 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc0a8889-b876-4532-9c20-c5a0ecae9dd4-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.326014 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc0a8889-b876-4532-9c20-c5a0ecae9dd4-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.326026 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cn72f\" (UniqueName: \"kubernetes.io/projected/824f4509-b658-47ca-90b7-3725a3839996-kube-api-access-cn72f\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.326037 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/824f4509-b658-47ca-90b7-3725a3839996-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.326048 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/824f4509-b658-47ca-90b7-3725a3839996-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.326059 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc0a8889-b876-4532-9c20-c5a0ecae9dd4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.506248 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f64d5748f-ckqhg" event={"ID":"a45b832b-22cd-47fa-bd7f-83ad23d2d135","Type":"ContainerDied","Data":"469eb22c07799a879c99efc2e62b9f2abd74f7cfdd1db82aa2777c21a4f44b55"} Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.506299 4821 scope.go:117] "RemoveContainer" containerID="f4496f07efe2e8406a06d98600d8f1f5ef294c6a272c36ee713b4899caae9b35" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.506325 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f64d5748f-ckqhg" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.508784 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-676jk" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.509093 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-676jk" event={"ID":"824f4509-b658-47ca-90b7-3725a3839996","Type":"ContainerDied","Data":"3a5c0b41fdbd7dbb5a588dbb84bd424df460cf94d7cea2c2d12452a34d08aeb1"} Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.509154 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a5c0b41fdbd7dbb5a588dbb84bd424df460cf94d7cea2c2d12452a34d08aeb1" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.510860 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zx2qv" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.510920 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zx2qv" event={"ID":"fc0a8889-b876-4532-9c20-c5a0ecae9dd4","Type":"ContainerDied","Data":"2ae63b0143aa0e75b0126f58aa457178aa976e7ec96a43e54ae649e4eb198355"} Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.510946 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ae63b0143aa0e75b0126f58aa457178aa976e7ec96a43e54ae649e4eb198355" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.539045 4821 scope.go:117] "RemoveContainer" containerID="7c8adb45471709c246f83939b757fbd276c1ceecc352a8177d6284b63b156583" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.593408 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Sep 30 17:22:02 crc kubenswrapper[4821]: E0930 17:22:02.594192 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="824f4509-b658-47ca-90b7-3725a3839996" containerName="nova-manage" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.594219 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="824f4509-b658-47ca-90b7-3725a3839996" containerName="nova-manage" Sep 30 17:22:02 crc kubenswrapper[4821]: E0930 17:22:02.594267 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc0a8889-b876-4532-9c20-c5a0ecae9dd4" containerName="nova-cell1-conductor-db-sync" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.594280 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc0a8889-b876-4532-9c20-c5a0ecae9dd4" containerName="nova-cell1-conductor-db-sync" Sep 30 17:22:02 crc kubenswrapper[4821]: E0930 17:22:02.594316 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a45b832b-22cd-47fa-bd7f-83ad23d2d135" containerName="init" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.594327 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="a45b832b-22cd-47fa-bd7f-83ad23d2d135" containerName="init" Sep 30 17:22:02 crc kubenswrapper[4821]: E0930 17:22:02.594371 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a45b832b-22cd-47fa-bd7f-83ad23d2d135" containerName="dnsmasq-dns" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.594380 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="a45b832b-22cd-47fa-bd7f-83ad23d2d135" containerName="dnsmasq-dns" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.594690 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc0a8889-b876-4532-9c20-c5a0ecae9dd4" containerName="nova-cell1-conductor-db-sync" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.594720 4821 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="824f4509-b658-47ca-90b7-3725a3839996" containerName="nova-manage" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.594735 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="a45b832b-22cd-47fa-bd7f-83ad23d2d135" containerName="dnsmasq-dns" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.595937 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.599986 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.601332 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f64d5748f-ckqhg"] Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.608853 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f64d5748f-ckqhg"] Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.628665 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.719380 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a45b832b-22cd-47fa-bd7f-83ad23d2d135" path="/var/lib/kubelet/pods/a45b832b-22cd-47fa-bd7f-83ad23d2d135/volumes" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.726931 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.727207 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3aaeda51-9c0c-4311-902d-82e8c6b38762" containerName="nova-api-log" containerID="cri-o://c6c2ff5f464b58d82eee5f8c6fdaf5fedd20fe6b1ae4c9d7dd1f43306f7cbdac" gracePeriod=30 Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.727364 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3aaeda51-9c0c-4311-902d-82e8c6b38762" containerName="nova-api-api" containerID="cri-o://6616d11b19cfc66c7b344dded05b4099e527de2da2f9c7de7cc44ef42f409299" gracePeriod=30 Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.731572 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b7dl\" (UniqueName: \"kubernetes.io/projected/1d8fbb09-93be-43d9-82dd-2de6db113d0c-kube-api-access-2b7dl\") pod \"nova-cell1-conductor-0\" (UID: \"1d8fbb09-93be-43d9-82dd-2de6db113d0c\") " pod="openstack/nova-cell1-conductor-0" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.731680 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8fbb09-93be-43d9-82dd-2de6db113d0c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1d8fbb09-93be-43d9-82dd-2de6db113d0c\") " pod="openstack/nova-cell1-conductor-0" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.731753 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d8fbb09-93be-43d9-82dd-2de6db113d0c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1d8fbb09-93be-43d9-82dd-2de6db113d0c\") " pod="openstack/nova-cell1-conductor-0" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.738663 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-scheduler-0"] Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.833478 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b7dl\" (UniqueName: \"kubernetes.io/projected/1d8fbb09-93be-43d9-82dd-2de6db113d0c-kube-api-access-2b7dl\") pod \"nova-cell1-conductor-0\" (UID: \"1d8fbb09-93be-43d9-82dd-2de6db113d0c\") " pod="openstack/nova-cell1-conductor-0" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.833578 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8fbb09-93be-43d9-82dd-2de6db113d0c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1d8fbb09-93be-43d9-82dd-2de6db113d0c\") " pod="openstack/nova-cell1-conductor-0" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.833619 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d8fbb09-93be-43d9-82dd-2de6db113d0c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1d8fbb09-93be-43d9-82dd-2de6db113d0c\") " pod="openstack/nova-cell1-conductor-0" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.837626 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8fbb09-93be-43d9-82dd-2de6db113d0c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1d8fbb09-93be-43d9-82dd-2de6db113d0c\") " pod="openstack/nova-cell1-conductor-0" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.849282 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d8fbb09-93be-43d9-82dd-2de6db113d0c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1d8fbb09-93be-43d9-82dd-2de6db113d0c\") " pod="openstack/nova-cell1-conductor-0" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.852028 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b7dl\" (UniqueName: \"kubernetes.io/projected/1d8fbb09-93be-43d9-82dd-2de6db113d0c-kube-api-access-2b7dl\") pod \"nova-cell1-conductor-0\" (UID: \"1d8fbb09-93be-43d9-82dd-2de6db113d0c\") " pod="openstack/nova-cell1-conductor-0" Sep 30 17:22:02 crc kubenswrapper[4821]: I0930 17:22:02.924614 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Sep 30 17:22:03 crc kubenswrapper[4821]: I0930 17:22:03.398841 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Sep 30 17:22:03 crc kubenswrapper[4821]: I0930 17:22:03.528222 4821 generic.go:334] "Generic (PLEG): container finished" podID="3aaeda51-9c0c-4311-902d-82e8c6b38762" containerID="c6c2ff5f464b58d82eee5f8c6fdaf5fedd20fe6b1ae4c9d7dd1f43306f7cbdac" exitCode=143 Sep 30 17:22:03 crc kubenswrapper[4821]: I0930 17:22:03.528282 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3aaeda51-9c0c-4311-902d-82e8c6b38762","Type":"ContainerDied","Data":"c6c2ff5f464b58d82eee5f8c6fdaf5fedd20fe6b1ae4c9d7dd1f43306f7cbdac"} Sep 30 17:22:03 crc kubenswrapper[4821]: I0930 17:22:03.530243 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d" containerName="nova-scheduler-scheduler" containerID="cri-o://363c090a396c8c6ceb06e64e66a2d425d73c10e51d7ffa002264c1a0e597a9e9" gracePeriod=30 Sep 30 17:22:03 crc kubenswrapper[4821]: I0930 17:22:03.530332 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1d8fbb09-93be-43d9-82dd-2de6db113d0c","Type":"ContainerStarted","Data":"dced37ea88f359d27db6d13c30325f9dfc2aa85e20868d0a4d81be5d62ba0192"} Sep 30 17:22:04 crc kubenswrapper[4821]: I0930 17:22:04.539546 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1d8fbb09-93be-43d9-82dd-2de6db113d0c","Type":"ContainerStarted","Data":"7676f026b34528e818ee8631ec56cb02cb6af6238db382d91372b916cef2beb3"} Sep 30 17:22:04 crc kubenswrapper[4821]: I0930 17:22:04.542642 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Sep 30 17:22:04 crc kubenswrapper[4821]: I0930 17:22:04.566600 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.5665752299999998 podStartE2EDuration="2.56657523s" podCreationTimestamp="2025-09-30 17:22:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:22:04.564350555 +0000 UTC m=+1120.469396519" watchObservedRunningTime="2025-09-30 17:22:04.56657523 +0000 UTC m=+1120.471621174" Sep 30 17:22:06 crc kubenswrapper[4821]: E0930 17:22:06.017044 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="363c090a396c8c6ceb06e64e66a2d425d73c10e51d7ffa002264c1a0e597a9e9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Sep 30 17:22:06 crc kubenswrapper[4821]: E0930 17:22:06.021250 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="363c090a396c8c6ceb06e64e66a2d425d73c10e51d7ffa002264c1a0e597a9e9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Sep 30 17:22:06 crc kubenswrapper[4821]: E0930 17:22:06.022802 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="363c090a396c8c6ceb06e64e66a2d425d73c10e51d7ffa002264c1a0e597a9e9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Sep 30 17:22:06 crc kubenswrapper[4821]: E0930 17:22:06.022861 4821 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d" containerName="nova-scheduler-scheduler" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.233069 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.268490 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9pj8\" (UniqueName: \"kubernetes.io/projected/20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d-kube-api-access-g9pj8\") pod \"20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d\" (UID: \"20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d\") " Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.268650 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d-combined-ca-bundle\") pod \"20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d\" (UID: \"20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d\") " Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.268835 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d-config-data\") pod \"20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d\" (UID: \"20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d\") " Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.297623 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d-kube-api-access-g9pj8" (OuterVolumeSpecName: "kube-api-access-g9pj8") pod "20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d" (UID: "20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d"). InnerVolumeSpecName "kube-api-access-g9pj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.302254 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d-config-data" (OuterVolumeSpecName: "config-data") pod "20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d" (UID: "20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.330054 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d" (UID: "20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.371457 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9pj8\" (UniqueName: \"kubernetes.io/projected/20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d-kube-api-access-g9pj8\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.371517 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.371529 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.579348 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.591236 4821 generic.go:334] "Generic (PLEG): container finished" podID="20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d" containerID="363c090a396c8c6ceb06e64e66a2d425d73c10e51d7ffa002264c1a0e597a9e9" exitCode=0 Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.591321 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d","Type":"ContainerDied","Data":"363c090a396c8c6ceb06e64e66a2d425d73c10e51d7ffa002264c1a0e597a9e9"} Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.591336 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.591355 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d","Type":"ContainerDied","Data":"940c1705f211d2b38ae0ffa76040c61a2cc27f682532f7d0bb2cd32a957edafb"} Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.591376 4821 scope.go:117] "RemoveContainer" containerID="363c090a396c8c6ceb06e64e66a2d425d73c10e51d7ffa002264c1a0e597a9e9" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.607964 4821 generic.go:334] "Generic (PLEG): container finished" podID="3aaeda51-9c0c-4311-902d-82e8c6b38762" containerID="6616d11b19cfc66c7b344dded05b4099e527de2da2f9c7de7cc44ef42f409299" exitCode=0 Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.608003 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3aaeda51-9c0c-4311-902d-82e8c6b38762","Type":"ContainerDied","Data":"6616d11b19cfc66c7b344dded05b4099e527de2da2f9c7de7cc44ef42f409299"} Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.608026 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3aaeda51-9c0c-4311-902d-82e8c6b38762","Type":"ContainerDied","Data":"700ea73201d92f8d3e2b9a04f846d05161e42fcf9eb112f5b021002b53fc8b5a"} Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.608127 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.642488 4821 scope.go:117] "RemoveContainer" containerID="363c090a396c8c6ceb06e64e66a2d425d73c10e51d7ffa002264c1a0e597a9e9" Sep 30 17:22:08 crc kubenswrapper[4821]: E0930 17:22:08.642998 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"363c090a396c8c6ceb06e64e66a2d425d73c10e51d7ffa002264c1a0e597a9e9\": container with ID starting with 363c090a396c8c6ceb06e64e66a2d425d73c10e51d7ffa002264c1a0e597a9e9 not found: ID does not exist" containerID="363c090a396c8c6ceb06e64e66a2d425d73c10e51d7ffa002264c1a0e597a9e9" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.643047 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"363c090a396c8c6ceb06e64e66a2d425d73c10e51d7ffa002264c1a0e597a9e9"} err="failed to get container status \"363c090a396c8c6ceb06e64e66a2d425d73c10e51d7ffa002264c1a0e597a9e9\": rpc error: code = NotFound desc = could not find container \"363c090a396c8c6ceb06e64e66a2d425d73c10e51d7ffa002264c1a0e597a9e9\": container with ID starting with 363c090a396c8c6ceb06e64e66a2d425d73c10e51d7ffa002264c1a0e597a9e9 not found: ID does not exist" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.643097 4821 scope.go:117] "RemoveContainer" containerID="6616d11b19cfc66c7b344dded05b4099e527de2da2f9c7de7cc44ef42f409299" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.658777 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.662216 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.675414 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Sep 30 17:22:08 crc kubenswrapper[4821]: E0930 17:22:08.675885 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aaeda51-9c0c-4311-902d-82e8c6b38762" containerName="nova-api-api" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.675907 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aaeda51-9c0c-4311-902d-82e8c6b38762" containerName="nova-api-api" Sep 30 17:22:08 crc kubenswrapper[4821]: E0930 17:22:08.675941 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aaeda51-9c0c-4311-902d-82e8c6b38762" containerName="nova-api-log" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.675949 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aaeda51-9c0c-4311-902d-82e8c6b38762" containerName="nova-api-log" Sep 30 17:22:08 crc kubenswrapper[4821]: E0930 17:22:08.675971 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d" containerName="nova-scheduler-scheduler" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.675992 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d" containerName="nova-scheduler-scheduler" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.676210 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdg4x\" (UniqueName: \"kubernetes.io/projected/3aaeda51-9c0c-4311-902d-82e8c6b38762-kube-api-access-bdg4x\") pod \"3aaeda51-9c0c-4311-902d-82e8c6b38762\" (UID: \"3aaeda51-9c0c-4311-902d-82e8c6b38762\") " Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.676240 4821 
memory_manager.go:354] "RemoveStaleState removing state" podUID="20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d" containerName="nova-scheduler-scheduler" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.676266 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aaeda51-9c0c-4311-902d-82e8c6b38762" containerName="nova-api-log" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.676284 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aaeda51-9c0c-4311-902d-82e8c6b38762" containerName="nova-api-api" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.676657 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3aaeda51-9c0c-4311-902d-82e8c6b38762-logs" (OuterVolumeSpecName: "logs") pod "3aaeda51-9c0c-4311-902d-82e8c6b38762" (UID: "3aaeda51-9c0c-4311-902d-82e8c6b38762"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.677143 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.676265 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3aaeda51-9c0c-4311-902d-82e8c6b38762-logs\") pod \"3aaeda51-9c0c-4311-902d-82e8c6b38762\" (UID: \"3aaeda51-9c0c-4311-902d-82e8c6b38762\") " Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.677773 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aaeda51-9c0c-4311-902d-82e8c6b38762-combined-ca-bundle\") pod \"3aaeda51-9c0c-4311-902d-82e8c6b38762\" (UID: \"3aaeda51-9c0c-4311-902d-82e8c6b38762\") " Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.677914 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aaeda51-9c0c-4311-902d-82e8c6b38762-config-data\") pod \"3aaeda51-9c0c-4311-902d-82e8c6b38762\" (UID: \"3aaeda51-9c0c-4311-902d-82e8c6b38762\") " Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.678506 4821 scope.go:117] "RemoveContainer" containerID="c6c2ff5f464b58d82eee5f8c6fdaf5fedd20fe6b1ae4c9d7dd1f43306f7cbdac" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.678632 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3aaeda51-9c0c-4311-902d-82e8c6b38762-logs\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.679646 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.679942 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aaeda51-9c0c-4311-902d-82e8c6b38762-kube-api-access-bdg4x" (OuterVolumeSpecName: "kube-api-access-bdg4x") pod "3aaeda51-9c0c-4311-902d-82e8c6b38762" (UID: "3aaeda51-9c0c-4311-902d-82e8c6b38762"). InnerVolumeSpecName "kube-api-access-bdg4x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.683998 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.704478 4821 scope.go:117] "RemoveContainer" containerID="6616d11b19cfc66c7b344dded05b4099e527de2da2f9c7de7cc44ef42f409299" Sep 30 17:22:08 crc kubenswrapper[4821]: E0930 17:22:08.704900 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6616d11b19cfc66c7b344dded05b4099e527de2da2f9c7de7cc44ef42f409299\": container with ID starting with 6616d11b19cfc66c7b344dded05b4099e527de2da2f9c7de7cc44ef42f409299 not found: ID does not exist" containerID="6616d11b19cfc66c7b344dded05b4099e527de2da2f9c7de7cc44ef42f409299" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.704935 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6616d11b19cfc66c7b344dded05b4099e527de2da2f9c7de7cc44ef42f409299"} err="failed to get container status \"6616d11b19cfc66c7b344dded05b4099e527de2da2f9c7de7cc44ef42f409299\": rpc error: code = NotFound desc = could not find container \"6616d11b19cfc66c7b344dded05b4099e527de2da2f9c7de7cc44ef42f409299\": container with ID starting with 6616d11b19cfc66c7b344dded05b4099e527de2da2f9c7de7cc44ef42f409299 not found: ID does not exist" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.704962 4821 scope.go:117] "RemoveContainer" containerID="c6c2ff5f464b58d82eee5f8c6fdaf5fedd20fe6b1ae4c9d7dd1f43306f7cbdac" Sep 30 17:22:08 crc kubenswrapper[4821]: E0930 17:22:08.705306 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6c2ff5f464b58d82eee5f8c6fdaf5fedd20fe6b1ae4c9d7dd1f43306f7cbdac\": container with ID starting with c6c2ff5f464b58d82eee5f8c6fdaf5fedd20fe6b1ae4c9d7dd1f43306f7cbdac not found: ID does not exist" containerID="c6c2ff5f464b58d82eee5f8c6fdaf5fedd20fe6b1ae4c9d7dd1f43306f7cbdac" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.705449 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6c2ff5f464b58d82eee5f8c6fdaf5fedd20fe6b1ae4c9d7dd1f43306f7cbdac"} err="failed to get container status \"c6c2ff5f464b58d82eee5f8c6fdaf5fedd20fe6b1ae4c9d7dd1f43306f7cbdac\": rpc error: code = NotFound desc = could not find container \"c6c2ff5f464b58d82eee5f8c6fdaf5fedd20fe6b1ae4c9d7dd1f43306f7cbdac\": container with ID starting with c6c2ff5f464b58d82eee5f8c6fdaf5fedd20fe6b1ae4c9d7dd1f43306f7cbdac not found: ID does not exist" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.715217 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aaeda51-9c0c-4311-902d-82e8c6b38762-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3aaeda51-9c0c-4311-902d-82e8c6b38762" (UID: "3aaeda51-9c0c-4311-902d-82e8c6b38762"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.717849 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aaeda51-9c0c-4311-902d-82e8c6b38762-config-data" (OuterVolumeSpecName: "config-data") pod "3aaeda51-9c0c-4311-902d-82e8c6b38762" (UID: "3aaeda51-9c0c-4311-902d-82e8c6b38762"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.718112 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d" path="/var/lib/kubelet/pods/20b5bc75-9e66-4f70-b97f-eaa1c74a8c9d/volumes" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.779985 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b825d54c-6a7f-4f40-a9e1-1cd385220b3c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b825d54c-6a7f-4f40-a9e1-1cd385220b3c\") " pod="openstack/nova-scheduler-0" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.780354 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xtfz\" (UniqueName: \"kubernetes.io/projected/b825d54c-6a7f-4f40-a9e1-1cd385220b3c-kube-api-access-8xtfz\") pod \"nova-scheduler-0\" (UID: \"b825d54c-6a7f-4f40-a9e1-1cd385220b3c\") " pod="openstack/nova-scheduler-0" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.780425 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b825d54c-6a7f-4f40-a9e1-1cd385220b3c-config-data\") pod \"nova-scheduler-0\" (UID: \"b825d54c-6a7f-4f40-a9e1-1cd385220b3c\") " pod="openstack/nova-scheduler-0" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.780554 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aaeda51-9c0c-4311-902d-82e8c6b38762-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.780575 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdg4x\" (UniqueName: \"kubernetes.io/projected/3aaeda51-9c0c-4311-902d-82e8c6b38762-kube-api-access-bdg4x\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.780590 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aaeda51-9c0c-4311-902d-82e8c6b38762-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.881825 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b825d54c-6a7f-4f40-a9e1-1cd385220b3c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b825d54c-6a7f-4f40-a9e1-1cd385220b3c\") " pod="openstack/nova-scheduler-0" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.881955 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xtfz\" (UniqueName: \"kubernetes.io/projected/b825d54c-6a7f-4f40-a9e1-1cd385220b3c-kube-api-access-8xtfz\") pod \"nova-scheduler-0\" (UID: \"b825d54c-6a7f-4f40-a9e1-1cd385220b3c\") " pod="openstack/nova-scheduler-0" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.881977 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b825d54c-6a7f-4f40-a9e1-1cd385220b3c-config-data\") pod \"nova-scheduler-0\" (UID: \"b825d54c-6a7f-4f40-a9e1-1cd385220b3c\") " pod="openstack/nova-scheduler-0" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.885917 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/b825d54c-6a7f-4f40-a9e1-1cd385220b3c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b825d54c-6a7f-4f40-a9e1-1cd385220b3c\") " pod="openstack/nova-scheduler-0" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.885985 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b825d54c-6a7f-4f40-a9e1-1cd385220b3c-config-data\") pod \"nova-scheduler-0\" (UID: \"b825d54c-6a7f-4f40-a9e1-1cd385220b3c\") " pod="openstack/nova-scheduler-0" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.900198 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xtfz\" (UniqueName: \"kubernetes.io/projected/b825d54c-6a7f-4f40-a9e1-1cd385220b3c-kube-api-access-8xtfz\") pod \"nova-scheduler-0\" (UID: \"b825d54c-6a7f-4f40-a9e1-1cd385220b3c\") " pod="openstack/nova-scheduler-0" Sep 30 17:22:08 crc kubenswrapper[4821]: I0930 17:22:08.983880 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Sep 30 17:22:09 crc kubenswrapper[4821]: I0930 17:22:08.999003 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Sep 30 17:22:09 crc kubenswrapper[4821]: I0930 17:22:09.000216 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Sep 30 17:22:09 crc kubenswrapper[4821]: I0930 17:22:09.005769 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Sep 30 17:22:09 crc kubenswrapper[4821]: I0930 17:22:09.008430 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Sep 30 17:22:09 crc kubenswrapper[4821]: I0930 17:22:09.015459 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Sep 30 17:22:09 crc kubenswrapper[4821]: I0930 17:22:09.032917 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Sep 30 17:22:09 crc kubenswrapper[4821]: I0930 17:22:09.085319 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrn8d\" (UniqueName: \"kubernetes.io/projected/d0aa01dc-304f-4cf5-b681-f4854024f85d-kube-api-access-nrn8d\") pod \"nova-api-0\" (UID: \"d0aa01dc-304f-4cf5-b681-f4854024f85d\") " pod="openstack/nova-api-0" Sep 30 17:22:09 crc kubenswrapper[4821]: I0930 17:22:09.085617 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0aa01dc-304f-4cf5-b681-f4854024f85d-config-data\") pod \"nova-api-0\" (UID: \"d0aa01dc-304f-4cf5-b681-f4854024f85d\") " pod="openstack/nova-api-0" Sep 30 17:22:09 crc kubenswrapper[4821]: I0930 17:22:09.085724 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0aa01dc-304f-4cf5-b681-f4854024f85d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d0aa01dc-304f-4cf5-b681-f4854024f85d\") " pod="openstack/nova-api-0" Sep 30 17:22:09 crc kubenswrapper[4821]: I0930 17:22:09.085916 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0aa01dc-304f-4cf5-b681-f4854024f85d-logs\") pod \"nova-api-0\" (UID: \"d0aa01dc-304f-4cf5-b681-f4854024f85d\") " pod="openstack/nova-api-0" Sep 30 17:22:09 crc kubenswrapper[4821]: I0930 17:22:09.188864 4821 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrn8d\" (UniqueName: \"kubernetes.io/projected/d0aa01dc-304f-4cf5-b681-f4854024f85d-kube-api-access-nrn8d\") pod \"nova-api-0\" (UID: \"d0aa01dc-304f-4cf5-b681-f4854024f85d\") " pod="openstack/nova-api-0" Sep 30 17:22:09 crc kubenswrapper[4821]: I0930 17:22:09.188941 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0aa01dc-304f-4cf5-b681-f4854024f85d-config-data\") pod \"nova-api-0\" (UID: \"d0aa01dc-304f-4cf5-b681-f4854024f85d\") " pod="openstack/nova-api-0" Sep 30 17:22:09 crc kubenswrapper[4821]: I0930 17:22:09.188973 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0aa01dc-304f-4cf5-b681-f4854024f85d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d0aa01dc-304f-4cf5-b681-f4854024f85d\") " pod="openstack/nova-api-0" Sep 30 17:22:09 crc kubenswrapper[4821]: I0930 17:22:09.189047 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0aa01dc-304f-4cf5-b681-f4854024f85d-logs\") pod \"nova-api-0\" (UID: \"d0aa01dc-304f-4cf5-b681-f4854024f85d\") " pod="openstack/nova-api-0" Sep 30 17:22:09 crc kubenswrapper[4821]: I0930 17:22:09.189598 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0aa01dc-304f-4cf5-b681-f4854024f85d-logs\") pod \"nova-api-0\" (UID: \"d0aa01dc-304f-4cf5-b681-f4854024f85d\") " pod="openstack/nova-api-0" Sep 30 17:22:09 crc kubenswrapper[4821]: I0930 17:22:09.204328 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0aa01dc-304f-4cf5-b681-f4854024f85d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d0aa01dc-304f-4cf5-b681-f4854024f85d\") " pod="openstack/nova-api-0" Sep 30 17:22:09 crc kubenswrapper[4821]: I0930 17:22:09.204809 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0aa01dc-304f-4cf5-b681-f4854024f85d-config-data\") pod \"nova-api-0\" (UID: \"d0aa01dc-304f-4cf5-b681-f4854024f85d\") " pod="openstack/nova-api-0" Sep 30 17:22:09 crc kubenswrapper[4821]: I0930 17:22:09.211591 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrn8d\" (UniqueName: \"kubernetes.io/projected/d0aa01dc-304f-4cf5-b681-f4854024f85d-kube-api-access-nrn8d\") pod \"nova-api-0\" (UID: \"d0aa01dc-304f-4cf5-b681-f4854024f85d\") " pod="openstack/nova-api-0" Sep 30 17:22:09 crc kubenswrapper[4821]: I0930 17:22:09.335342 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Sep 30 17:22:09 crc kubenswrapper[4821]: I0930 17:22:09.484098 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Sep 30 17:22:09 crc kubenswrapper[4821]: I0930 17:22:09.621627 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b825d54c-6a7f-4f40-a9e1-1cd385220b3c","Type":"ContainerStarted","Data":"4155062c64229344039de9dce1fee5a621c5fd8bd58c2ae25bd4888eeefad4cb"} Sep 30 17:22:09 crc kubenswrapper[4821]: I0930 17:22:09.789523 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Sep 30 17:22:10 crc kubenswrapper[4821]: I0930 17:22:10.632667 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b825d54c-6a7f-4f40-a9e1-1cd385220b3c","Type":"ContainerStarted","Data":"e94926283e163be67b35b00cac1c2149db18dd172c5e70f46623813b0117ad3f"} Sep 30 17:22:10 crc kubenswrapper[4821]: I0930 17:22:10.635331 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0aa01dc-304f-4cf5-b681-f4854024f85d","Type":"ContainerStarted","Data":"426345a0357e10bbd8aafb0ba4ea8f283414e6e493b7f1a6e57173442c3ce763"} Sep 30 17:22:10 crc kubenswrapper[4821]: I0930 17:22:10.635377 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0aa01dc-304f-4cf5-b681-f4854024f85d","Type":"ContainerStarted","Data":"a0a18ee60ba01dad5852b30dcb42ea41fcf8ff3b8688dbd5e2ebae595732e1ed"} Sep 30 17:22:10 crc kubenswrapper[4821]: I0930 17:22:10.635407 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0aa01dc-304f-4cf5-b681-f4854024f85d","Type":"ContainerStarted","Data":"6413c82834c6e5f04d4c93bfc0d9fc0792dab4c886327c25aad23dc1d03a5aa5"} Sep 30 17:22:10 crc kubenswrapper[4821]: I0930 17:22:10.652716 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.6526990660000003 podStartE2EDuration="2.652699066s" podCreationTimestamp="2025-09-30 17:22:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:22:10.649011264 +0000 UTC m=+1126.554057218" watchObservedRunningTime="2025-09-30 17:22:10.652699066 +0000 UTC m=+1126.557745010" Sep 30 17:22:10 crc kubenswrapper[4821]: I0930 17:22:10.663868 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.663849843 podStartE2EDuration="2.663849843s" podCreationTimestamp="2025-09-30 17:22:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:22:10.662966341 +0000 UTC m=+1126.568012285" watchObservedRunningTime="2025-09-30 17:22:10.663849843 +0000 UTC m=+1126.568895807" Sep 30 17:22:10 crc kubenswrapper[4821]: I0930 17:22:10.717109 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3aaeda51-9c0c-4311-902d-82e8c6b38762" path="/var/lib/kubelet/pods/3aaeda51-9c0c-4311-902d-82e8c6b38762/volumes" Sep 30 17:22:12 crc kubenswrapper[4821]: I0930 17:22:12.948635 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Sep 30 17:22:14 crc kubenswrapper[4821]: I0930 17:22:14.000630 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" 
Sep 30 17:22:19 crc kubenswrapper[4821]: I0930 17:22:19.001318 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Sep 30 17:22:19 crc kubenswrapper[4821]: I0930 17:22:19.026990 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Sep 30 17:22:19 crc kubenswrapper[4821]: I0930 17:22:19.336014 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Sep 30 17:22:19 crc kubenswrapper[4821]: I0930 17:22:19.336454 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Sep 30 17:22:19 crc kubenswrapper[4821]: I0930 17:22:19.349341 4821 patch_prober.go:28] interesting pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:22:19 crc kubenswrapper[4821]: I0930 17:22:19.349412 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:22:19 crc kubenswrapper[4821]: I0930 17:22:19.746046 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Sep 30 17:22:20 crc kubenswrapper[4821]: I0930 17:22:20.418252 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d0aa01dc-304f-4cf5-b681-f4854024f85d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.177:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Sep 30 17:22:20 crc kubenswrapper[4821]: I0930 17:22:20.418252 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d0aa01dc-304f-4cf5-b681-f4854024f85d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.177:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Sep 30 17:22:26 crc kubenswrapper[4821]: I0930 17:22:26.775205 4821 generic.go:334] "Generic (PLEG): container finished" podID="78b7897c-466d-437f-946b-e3f2a220c119" containerID="fd6979c1b177dd7d1e22ab200d53f9c9fb743efab1bb49ad7ad3e4b7ba40cdea" exitCode=137 Sep 30 17:22:26 crc kubenswrapper[4821]: I0930 17:22:26.775659 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"78b7897c-466d-437f-946b-e3f2a220c119","Type":"ContainerDied","Data":"fd6979c1b177dd7d1e22ab200d53f9c9fb743efab1bb49ad7ad3e4b7ba40cdea"} Sep 30 17:22:26 crc kubenswrapper[4821]: I0930 17:22:26.775740 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"78b7897c-466d-437f-946b-e3f2a220c119","Type":"ContainerDied","Data":"1d9eba8b4c6051448494cf82012129cb3cddef1059327a9f7bb86308c9c3c0d0"} Sep 30 17:22:26 crc kubenswrapper[4821]: I0930 17:22:26.775750 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d9eba8b4c6051448494cf82012129cb3cddef1059327a9f7bb86308c9c3c0d0" Sep 30 17:22:26 crc kubenswrapper[4821]: I0930 17:22:26.777523 4821 generic.go:334] "Generic (PLEG): container finished" 
podID="78cbac58-13eb-43ae-b814-24b65ea2e9d1" containerID="2d8710e1d95c53c166e1db38ac962a46bb01e0bb698e53b16c68efa2a13f31fe" exitCode=137 Sep 30 17:22:26 crc kubenswrapper[4821]: I0930 17:22:26.777547 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"78cbac58-13eb-43ae-b814-24b65ea2e9d1","Type":"ContainerDied","Data":"2d8710e1d95c53c166e1db38ac962a46bb01e0bb698e53b16c68efa2a13f31fe"} Sep 30 17:22:26 crc kubenswrapper[4821]: I0930 17:22:26.870724 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:22:26 crc kubenswrapper[4821]: I0930 17:22:26.879093 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Sep 30 17:22:26 crc kubenswrapper[4821]: I0930 17:22:26.988629 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78cbac58-13eb-43ae-b814-24b65ea2e9d1-logs\") pod \"78cbac58-13eb-43ae-b814-24b65ea2e9d1\" (UID: \"78cbac58-13eb-43ae-b814-24b65ea2e9d1\") " Sep 30 17:22:26 crc kubenswrapper[4821]: I0930 17:22:26.988851 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78b7897c-466d-437f-946b-e3f2a220c119-combined-ca-bundle\") pod \"78b7897c-466d-437f-946b-e3f2a220c119\" (UID: \"78b7897c-466d-437f-946b-e3f2a220c119\") " Sep 30 17:22:26 crc kubenswrapper[4821]: I0930 17:22:26.988890 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78cbac58-13eb-43ae-b814-24b65ea2e9d1-combined-ca-bundle\") pod \"78cbac58-13eb-43ae-b814-24b65ea2e9d1\" (UID: \"78cbac58-13eb-43ae-b814-24b65ea2e9d1\") " Sep 30 17:22:26 crc kubenswrapper[4821]: I0930 17:22:26.988945 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78b7897c-466d-437f-946b-e3f2a220c119-config-data\") pod \"78b7897c-466d-437f-946b-e3f2a220c119\" (UID: \"78b7897c-466d-437f-946b-e3f2a220c119\") " Sep 30 17:22:26 crc kubenswrapper[4821]: I0930 17:22:26.988995 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngrvf\" (UniqueName: \"kubernetes.io/projected/78cbac58-13eb-43ae-b814-24b65ea2e9d1-kube-api-access-ngrvf\") pod \"78cbac58-13eb-43ae-b814-24b65ea2e9d1\" (UID: \"78cbac58-13eb-43ae-b814-24b65ea2e9d1\") " Sep 30 17:22:26 crc kubenswrapper[4821]: I0930 17:22:26.989026 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78cbac58-13eb-43ae-b814-24b65ea2e9d1-config-data\") pod \"78cbac58-13eb-43ae-b814-24b65ea2e9d1\" (UID: \"78cbac58-13eb-43ae-b814-24b65ea2e9d1\") " Sep 30 17:22:26 crc kubenswrapper[4821]: I0930 17:22:26.989037 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78cbac58-13eb-43ae-b814-24b65ea2e9d1-logs" (OuterVolumeSpecName: "logs") pod "78cbac58-13eb-43ae-b814-24b65ea2e9d1" (UID: "78cbac58-13eb-43ae-b814-24b65ea2e9d1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:22:26 crc kubenswrapper[4821]: I0930 17:22:26.989056 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmpv6\" (UniqueName: \"kubernetes.io/projected/78b7897c-466d-437f-946b-e3f2a220c119-kube-api-access-bmpv6\") pod \"78b7897c-466d-437f-946b-e3f2a220c119\" (UID: \"78b7897c-466d-437f-946b-e3f2a220c119\") " Sep 30 17:22:26 crc kubenswrapper[4821]: I0930 17:22:26.990222 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78cbac58-13eb-43ae-b814-24b65ea2e9d1-logs\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:26 crc kubenswrapper[4821]: I0930 17:22:26.994228 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78b7897c-466d-437f-946b-e3f2a220c119-kube-api-access-bmpv6" (OuterVolumeSpecName: "kube-api-access-bmpv6") pod "78b7897c-466d-437f-946b-e3f2a220c119" (UID: "78b7897c-466d-437f-946b-e3f2a220c119"). InnerVolumeSpecName "kube-api-access-bmpv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:22:26 crc kubenswrapper[4821]: I0930 17:22:26.994331 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78cbac58-13eb-43ae-b814-24b65ea2e9d1-kube-api-access-ngrvf" (OuterVolumeSpecName: "kube-api-access-ngrvf") pod "78cbac58-13eb-43ae-b814-24b65ea2e9d1" (UID: "78cbac58-13eb-43ae-b814-24b65ea2e9d1"). InnerVolumeSpecName "kube-api-access-ngrvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.017060 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78cbac58-13eb-43ae-b814-24b65ea2e9d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78cbac58-13eb-43ae-b814-24b65ea2e9d1" (UID: "78cbac58-13eb-43ae-b814-24b65ea2e9d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.017416 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78b7897c-466d-437f-946b-e3f2a220c119-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78b7897c-466d-437f-946b-e3f2a220c119" (UID: "78b7897c-466d-437f-946b-e3f2a220c119"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.017846 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78cbac58-13eb-43ae-b814-24b65ea2e9d1-config-data" (OuterVolumeSpecName: "config-data") pod "78cbac58-13eb-43ae-b814-24b65ea2e9d1" (UID: "78cbac58-13eb-43ae-b814-24b65ea2e9d1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.029769 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78b7897c-466d-437f-946b-e3f2a220c119-config-data" (OuterVolumeSpecName: "config-data") pod "78b7897c-466d-437f-946b-e3f2a220c119" (UID: "78b7897c-466d-437f-946b-e3f2a220c119"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.092503 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78b7897c-466d-437f-946b-e3f2a220c119-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.092533 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngrvf\" (UniqueName: \"kubernetes.io/projected/78cbac58-13eb-43ae-b814-24b65ea2e9d1-kube-api-access-ngrvf\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.092544 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78cbac58-13eb-43ae-b814-24b65ea2e9d1-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.092553 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmpv6\" (UniqueName: \"kubernetes.io/projected/78b7897c-466d-437f-946b-e3f2a220c119-kube-api-access-bmpv6\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.092561 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78b7897c-466d-437f-946b-e3f2a220c119-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.092570 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78cbac58-13eb-43ae-b814-24b65ea2e9d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.788225 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"78cbac58-13eb-43ae-b814-24b65ea2e9d1","Type":"ContainerDied","Data":"372b3452d655603828ba2bba4d451403100eb0a4671b20f5e22be3f2bba03a9f"} Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.788614 4821 scope.go:117] "RemoveContainer" containerID="2d8710e1d95c53c166e1db38ac962a46bb01e0bb698e53b16c68efa2a13f31fe" Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.788292 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.788245 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.815132 4821 scope.go:117] "RemoveContainer" containerID="21c7b8250327f18ecfcc8bd6c47dbdbbd2113f969268ce4c1a9fe79f9be228d0" Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.842262 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.889675 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.908921 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.926335 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.938857 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 30 17:22:27 crc kubenswrapper[4821]: E0930 17:22:27.939425 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78cbac58-13eb-43ae-b814-24b65ea2e9d1" containerName="nova-metadata-log" Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.939534 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="78cbac58-13eb-43ae-b814-24b65ea2e9d1" containerName="nova-metadata-log" Sep 30 17:22:27 crc kubenswrapper[4821]: E0930 17:22:27.939619 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78b7897c-466d-437f-946b-e3f2a220c119" containerName="nova-cell1-novncproxy-novncproxy" Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.939678 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="78b7897c-466d-437f-946b-e3f2a220c119" containerName="nova-cell1-novncproxy-novncproxy" Sep 30 17:22:27 crc kubenswrapper[4821]: E0930 17:22:27.939796 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78cbac58-13eb-43ae-b814-24b65ea2e9d1" containerName="nova-metadata-metadata" Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.939851 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="78cbac58-13eb-43ae-b814-24b65ea2e9d1" containerName="nova-metadata-metadata" Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.940053 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="78b7897c-466d-437f-946b-e3f2a220c119" containerName="nova-cell1-novncproxy-novncproxy" Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.940201 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="78cbac58-13eb-43ae-b814-24b65ea2e9d1" containerName="nova-metadata-metadata" Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.940297 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="78cbac58-13eb-43ae-b814-24b65ea2e9d1" containerName="nova-metadata-log" Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.941033 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.943236 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.943921 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.946332 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.947532 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.950600 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.952798 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.953123 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.954912 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 30 17:22:27 crc kubenswrapper[4821]: I0930 17:22:27.971032 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.113199 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6715a782-71c0-4471-971c-0d7f7fada4a1-logs\") pod \"nova-metadata-0\" (UID: \"6715a782-71c0-4471-971c-0d7f7fada4a1\") " pod="openstack/nova-metadata-0" Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.113249 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7619b96-2d9a-4684-b08e-8e784c41e984-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7619b96-2d9a-4684-b08e-8e784c41e984\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.113314 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7619b96-2d9a-4684-b08e-8e784c41e984-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7619b96-2d9a-4684-b08e-8e784c41e984\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.113377 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6715a782-71c0-4471-971c-0d7f7fada4a1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6715a782-71c0-4471-971c-0d7f7fada4a1\") " pod="openstack/nova-metadata-0" Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.113417 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6715a782-71c0-4471-971c-0d7f7fada4a1-config-data\") pod \"nova-metadata-0\" (UID: \"6715a782-71c0-4471-971c-0d7f7fada4a1\") " pod="openstack/nova-metadata-0" Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 
17:22:28.113476 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7619b96-2d9a-4684-b08e-8e784c41e984-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7619b96-2d9a-4684-b08e-8e784c41e984\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.113537 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g428m\" (UniqueName: \"kubernetes.io/projected/6715a782-71c0-4471-971c-0d7f7fada4a1-kube-api-access-g428m\") pod \"nova-metadata-0\" (UID: \"6715a782-71c0-4471-971c-0d7f7fada4a1\") " pod="openstack/nova-metadata-0" Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.113558 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7619b96-2d9a-4684-b08e-8e784c41e984-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7619b96-2d9a-4684-b08e-8e784c41e984\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.113672 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf7xj\" (UniqueName: \"kubernetes.io/projected/f7619b96-2d9a-4684-b08e-8e784c41e984-kube-api-access-gf7xj\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7619b96-2d9a-4684-b08e-8e784c41e984\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.113736 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6715a782-71c0-4471-971c-0d7f7fada4a1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6715a782-71c0-4471-971c-0d7f7fada4a1\") " pod="openstack/nova-metadata-0" Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.214909 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6715a782-71c0-4471-971c-0d7f7fada4a1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6715a782-71c0-4471-971c-0d7f7fada4a1\") " pod="openstack/nova-metadata-0" Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.214961 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6715a782-71c0-4471-971c-0d7f7fada4a1-config-data\") pod \"nova-metadata-0\" (UID: \"6715a782-71c0-4471-971c-0d7f7fada4a1\") " pod="openstack/nova-metadata-0" Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.215026 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7619b96-2d9a-4684-b08e-8e784c41e984-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7619b96-2d9a-4684-b08e-8e784c41e984\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.215078 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g428m\" (UniqueName: \"kubernetes.io/projected/6715a782-71c0-4471-971c-0d7f7fada4a1-kube-api-access-g428m\") pod \"nova-metadata-0\" (UID: \"6715a782-71c0-4471-971c-0d7f7fada4a1\") " pod="openstack/nova-metadata-0" Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.215109 4821 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7619b96-2d9a-4684-b08e-8e784c41e984-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7619b96-2d9a-4684-b08e-8e784c41e984\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.215173 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf7xj\" (UniqueName: \"kubernetes.io/projected/f7619b96-2d9a-4684-b08e-8e784c41e984-kube-api-access-gf7xj\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7619b96-2d9a-4684-b08e-8e784c41e984\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.215214 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6715a782-71c0-4471-971c-0d7f7fada4a1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6715a782-71c0-4471-971c-0d7f7fada4a1\") " pod="openstack/nova-metadata-0" Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.215249 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6715a782-71c0-4471-971c-0d7f7fada4a1-logs\") pod \"nova-metadata-0\" (UID: \"6715a782-71c0-4471-971c-0d7f7fada4a1\") " pod="openstack/nova-metadata-0" Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.215276 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7619b96-2d9a-4684-b08e-8e784c41e984-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7619b96-2d9a-4684-b08e-8e784c41e984\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.215307 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7619b96-2d9a-4684-b08e-8e784c41e984-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7619b96-2d9a-4684-b08e-8e784c41e984\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.216641 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6715a782-71c0-4471-971c-0d7f7fada4a1-logs\") pod \"nova-metadata-0\" (UID: \"6715a782-71c0-4471-971c-0d7f7fada4a1\") " pod="openstack/nova-metadata-0" Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.221326 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7619b96-2d9a-4684-b08e-8e784c41e984-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7619b96-2d9a-4684-b08e-8e784c41e984\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.222128 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7619b96-2d9a-4684-b08e-8e784c41e984-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7619b96-2d9a-4684-b08e-8e784c41e984\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.222258 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7619b96-2d9a-4684-b08e-8e784c41e984-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"f7619b96-2d9a-4684-b08e-8e784c41e984\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.226455 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7619b96-2d9a-4684-b08e-8e784c41e984-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7619b96-2d9a-4684-b08e-8e784c41e984\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.227701 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6715a782-71c0-4471-971c-0d7f7fada4a1-config-data\") pod \"nova-metadata-0\" (UID: \"6715a782-71c0-4471-971c-0d7f7fada4a1\") " pod="openstack/nova-metadata-0" Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.228737 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6715a782-71c0-4471-971c-0d7f7fada4a1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6715a782-71c0-4471-971c-0d7f7fada4a1\") " pod="openstack/nova-metadata-0" Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.233358 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6715a782-71c0-4471-971c-0d7f7fada4a1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6715a782-71c0-4471-971c-0d7f7fada4a1\") " pod="openstack/nova-metadata-0" Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.234317 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf7xj\" (UniqueName: \"kubernetes.io/projected/f7619b96-2d9a-4684-b08e-8e784c41e984-kube-api-access-gf7xj\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7619b96-2d9a-4684-b08e-8e784c41e984\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.234750 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g428m\" (UniqueName: \"kubernetes.io/projected/6715a782-71c0-4471-971c-0d7f7fada4a1-kube-api-access-g428m\") pod \"nova-metadata-0\" (UID: \"6715a782-71c0-4471-971c-0d7f7fada4a1\") " pod="openstack/nova-metadata-0" Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.273614 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.280835 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.718316 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78b7897c-466d-437f-946b-e3f2a220c119" path="/var/lib/kubelet/pods/78b7897c-466d-437f-946b-e3f2a220c119/volumes" Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.719369 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78cbac58-13eb-43ae-b814-24b65ea2e9d1" path="/var/lib/kubelet/pods/78cbac58-13eb-43ae-b814-24b65ea2e9d1/volumes" Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.729826 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 30 17:22:28 crc kubenswrapper[4821]: W0930 17:22:28.731980 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7619b96_2d9a_4684_b08e_8e784c41e984.slice/crio-a2f6f0d277d07990b9c1f86a55e8721b4144dd6b713d95fa7028eeed5a63c156 WatchSource:0}: Error finding container a2f6f0d277d07990b9c1f86a55e8721b4144dd6b713d95fa7028eeed5a63c156: Status 404 returned error can't find the container with id a2f6f0d277d07990b9c1f86a55e8721b4144dd6b713d95fa7028eeed5a63c156 Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.784932 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Sep 30 17:22:28 crc kubenswrapper[4821]: W0930 17:22:28.791322 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6715a782_71c0_4471_971c_0d7f7fada4a1.slice/crio-e77afbd5733f41a20ce588fd1669ac2b231f42c0d6c95caf01d09afa42648556 WatchSource:0}: Error finding container e77afbd5733f41a20ce588fd1669ac2b231f42c0d6c95caf01d09afa42648556: Status 404 returned error can't find the container with id e77afbd5733f41a20ce588fd1669ac2b231f42c0d6c95caf01d09afa42648556 Sep 30 17:22:28 crc kubenswrapper[4821]: I0930 17:22:28.799033 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f7619b96-2d9a-4684-b08e-8e784c41e984","Type":"ContainerStarted","Data":"a2f6f0d277d07990b9c1f86a55e8721b4144dd6b713d95fa7028eeed5a63c156"} Sep 30 17:22:29 crc kubenswrapper[4821]: I0930 17:22:29.344037 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Sep 30 17:22:29 crc kubenswrapper[4821]: I0930 17:22:29.344531 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Sep 30 17:22:29 crc kubenswrapper[4821]: I0930 17:22:29.345053 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Sep 30 17:22:29 crc kubenswrapper[4821]: I0930 17:22:29.345336 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Sep 30 17:22:29 crc kubenswrapper[4821]: I0930 17:22:29.354947 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Sep 30 17:22:29 crc kubenswrapper[4821]: I0930 17:22:29.359831 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Sep 30 17:22:29 crc kubenswrapper[4821]: I0930 17:22:29.626325 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-778d8bb9d7-5pq8p"] Sep 30 17:22:29 crc kubenswrapper[4821]: I0930 17:22:29.628176 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-778d8bb9d7-5pq8p" Sep 30 17:22:29 crc kubenswrapper[4821]: I0930 17:22:29.644152 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-778d8bb9d7-5pq8p"] Sep 30 17:22:29 crc kubenswrapper[4821]: I0930 17:22:29.655251 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa3aca63-7477-4ea3-87f8-1d5ed010443c-dns-svc\") pod \"dnsmasq-dns-778d8bb9d7-5pq8p\" (UID: \"fa3aca63-7477-4ea3-87f8-1d5ed010443c\") " pod="openstack/dnsmasq-dns-778d8bb9d7-5pq8p" Sep 30 17:22:29 crc kubenswrapper[4821]: I0930 17:22:29.655513 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa3aca63-7477-4ea3-87f8-1d5ed010443c-config\") pod \"dnsmasq-dns-778d8bb9d7-5pq8p\" (UID: \"fa3aca63-7477-4ea3-87f8-1d5ed010443c\") " pod="openstack/dnsmasq-dns-778d8bb9d7-5pq8p" Sep 30 17:22:29 crc kubenswrapper[4821]: I0930 17:22:29.655624 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa3aca63-7477-4ea3-87f8-1d5ed010443c-ovsdbserver-nb\") pod \"dnsmasq-dns-778d8bb9d7-5pq8p\" (UID: \"fa3aca63-7477-4ea3-87f8-1d5ed010443c\") " pod="openstack/dnsmasq-dns-778d8bb9d7-5pq8p" Sep 30 17:22:29 crc kubenswrapper[4821]: I0930 17:22:29.655746 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hkwg\" (UniqueName: \"kubernetes.io/projected/fa3aca63-7477-4ea3-87f8-1d5ed010443c-kube-api-access-7hkwg\") pod \"dnsmasq-dns-778d8bb9d7-5pq8p\" (UID: \"fa3aca63-7477-4ea3-87f8-1d5ed010443c\") " pod="openstack/dnsmasq-dns-778d8bb9d7-5pq8p" Sep 30 17:22:29 crc kubenswrapper[4821]: I0930 17:22:29.656364 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa3aca63-7477-4ea3-87f8-1d5ed010443c-ovsdbserver-sb\") pod \"dnsmasq-dns-778d8bb9d7-5pq8p\" (UID: \"fa3aca63-7477-4ea3-87f8-1d5ed010443c\") " pod="openstack/dnsmasq-dns-778d8bb9d7-5pq8p" Sep 30 17:22:29 crc kubenswrapper[4821]: I0930 17:22:29.758039 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa3aca63-7477-4ea3-87f8-1d5ed010443c-dns-svc\") pod \"dnsmasq-dns-778d8bb9d7-5pq8p\" (UID: \"fa3aca63-7477-4ea3-87f8-1d5ed010443c\") " pod="openstack/dnsmasq-dns-778d8bb9d7-5pq8p" Sep 30 17:22:29 crc kubenswrapper[4821]: I0930 17:22:29.758124 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa3aca63-7477-4ea3-87f8-1d5ed010443c-config\") pod \"dnsmasq-dns-778d8bb9d7-5pq8p\" (UID: \"fa3aca63-7477-4ea3-87f8-1d5ed010443c\") " pod="openstack/dnsmasq-dns-778d8bb9d7-5pq8p" Sep 30 17:22:29 crc kubenswrapper[4821]: I0930 17:22:29.758171 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa3aca63-7477-4ea3-87f8-1d5ed010443c-ovsdbserver-nb\") pod \"dnsmasq-dns-778d8bb9d7-5pq8p\" (UID: \"fa3aca63-7477-4ea3-87f8-1d5ed010443c\") " pod="openstack/dnsmasq-dns-778d8bb9d7-5pq8p" Sep 30 17:22:29 crc kubenswrapper[4821]: I0930 17:22:29.758215 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-7hkwg\" (UniqueName: \"kubernetes.io/projected/fa3aca63-7477-4ea3-87f8-1d5ed010443c-kube-api-access-7hkwg\") pod \"dnsmasq-dns-778d8bb9d7-5pq8p\" (UID: \"fa3aca63-7477-4ea3-87f8-1d5ed010443c\") " pod="openstack/dnsmasq-dns-778d8bb9d7-5pq8p" Sep 30 17:22:29 crc kubenswrapper[4821]: I0930 17:22:29.758234 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa3aca63-7477-4ea3-87f8-1d5ed010443c-ovsdbserver-sb\") pod \"dnsmasq-dns-778d8bb9d7-5pq8p\" (UID: \"fa3aca63-7477-4ea3-87f8-1d5ed010443c\") " pod="openstack/dnsmasq-dns-778d8bb9d7-5pq8p" Sep 30 17:22:29 crc kubenswrapper[4821]: I0930 17:22:29.759083 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa3aca63-7477-4ea3-87f8-1d5ed010443c-ovsdbserver-sb\") pod \"dnsmasq-dns-778d8bb9d7-5pq8p\" (UID: \"fa3aca63-7477-4ea3-87f8-1d5ed010443c\") " pod="openstack/dnsmasq-dns-778d8bb9d7-5pq8p" Sep 30 17:22:29 crc kubenswrapper[4821]: I0930 17:22:29.760397 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa3aca63-7477-4ea3-87f8-1d5ed010443c-dns-svc\") pod \"dnsmasq-dns-778d8bb9d7-5pq8p\" (UID: \"fa3aca63-7477-4ea3-87f8-1d5ed010443c\") " pod="openstack/dnsmasq-dns-778d8bb9d7-5pq8p" Sep 30 17:22:29 crc kubenswrapper[4821]: I0930 17:22:29.766917 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa3aca63-7477-4ea3-87f8-1d5ed010443c-ovsdbserver-nb\") pod \"dnsmasq-dns-778d8bb9d7-5pq8p\" (UID: \"fa3aca63-7477-4ea3-87f8-1d5ed010443c\") " pod="openstack/dnsmasq-dns-778d8bb9d7-5pq8p" Sep 30 17:22:29 crc kubenswrapper[4821]: I0930 17:22:29.773152 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa3aca63-7477-4ea3-87f8-1d5ed010443c-config\") pod \"dnsmasq-dns-778d8bb9d7-5pq8p\" (UID: \"fa3aca63-7477-4ea3-87f8-1d5ed010443c\") " pod="openstack/dnsmasq-dns-778d8bb9d7-5pq8p" Sep 30 17:22:29 crc kubenswrapper[4821]: I0930 17:22:29.819812 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hkwg\" (UniqueName: \"kubernetes.io/projected/fa3aca63-7477-4ea3-87f8-1d5ed010443c-kube-api-access-7hkwg\") pod \"dnsmasq-dns-778d8bb9d7-5pq8p\" (UID: \"fa3aca63-7477-4ea3-87f8-1d5ed010443c\") " pod="openstack/dnsmasq-dns-778d8bb9d7-5pq8p" Sep 30 17:22:29 crc kubenswrapper[4821]: I0930 17:22:29.882768 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6715a782-71c0-4471-971c-0d7f7fada4a1","Type":"ContainerStarted","Data":"9e6e331360d33ef8daa28e9d192f2a23385cd1fbc546916b2dfa600658c71a1d"} Sep 30 17:22:29 crc kubenswrapper[4821]: I0930 17:22:29.882813 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6715a782-71c0-4471-971c-0d7f7fada4a1","Type":"ContainerStarted","Data":"83b369a38848bd390aab9fb95d1591da553911682724248fc6a2e9a1f4dff3ce"} Sep 30 17:22:29 crc kubenswrapper[4821]: I0930 17:22:29.882823 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6715a782-71c0-4471-971c-0d7f7fada4a1","Type":"ContainerStarted","Data":"e77afbd5733f41a20ce588fd1669ac2b231f42c0d6c95caf01d09afa42648556"} Sep 30 17:22:29 crc kubenswrapper[4821]: I0930 17:22:29.922711 4821 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.922692948 podStartE2EDuration="2.922692948s" podCreationTimestamp="2025-09-30 17:22:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:22:29.919295373 +0000 UTC m=+1145.824341317" watchObservedRunningTime="2025-09-30 17:22:29.922692948 +0000 UTC m=+1145.827738892" Sep 30 17:22:29 crc kubenswrapper[4821]: I0930 17:22:29.923772 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f7619b96-2d9a-4684-b08e-8e784c41e984","Type":"ContainerStarted","Data":"eba952761d08f7687c4eefe8a34aca32929bb5a16f7911a482d3ce317e3bd986"} Sep 30 17:22:30 crc kubenswrapper[4821]: I0930 17:22:30.010549 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-778d8bb9d7-5pq8p" Sep 30 17:22:30 crc kubenswrapper[4821]: I0930 17:22:30.509036 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.509017098 podStartE2EDuration="3.509017098s" podCreationTimestamp="2025-09-30 17:22:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:22:29.961945224 +0000 UTC m=+1145.866991188" watchObservedRunningTime="2025-09-30 17:22:30.509017098 +0000 UTC m=+1146.414063042" Sep 30 17:22:30 crc kubenswrapper[4821]: I0930 17:22:30.519740 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-778d8bb9d7-5pq8p"] Sep 30 17:22:30 crc kubenswrapper[4821]: I0930 17:22:30.936681 4821 generic.go:334] "Generic (PLEG): container finished" podID="fa3aca63-7477-4ea3-87f8-1d5ed010443c" containerID="68db437dbcc57f6cd9d26c99baec86438ab1820e0cf65580bb83a1184c243347" exitCode=0 Sep 30 17:22:30 crc kubenswrapper[4821]: I0930 17:22:30.936741 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-778d8bb9d7-5pq8p" event={"ID":"fa3aca63-7477-4ea3-87f8-1d5ed010443c","Type":"ContainerDied","Data":"68db437dbcc57f6cd9d26c99baec86438ab1820e0cf65580bb83a1184c243347"} Sep 30 17:22:30 crc kubenswrapper[4821]: I0930 17:22:30.936789 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-778d8bb9d7-5pq8p" event={"ID":"fa3aca63-7477-4ea3-87f8-1d5ed010443c","Type":"ContainerStarted","Data":"ad5f1fe373b2e2b29a2a4a17e1d92d7fea7392899d8ba25197cb2cd9154fbf5d"} Sep 30 17:22:31 crc kubenswrapper[4821]: I0930 17:22:31.947471 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-778d8bb9d7-5pq8p" event={"ID":"fa3aca63-7477-4ea3-87f8-1d5ed010443c","Type":"ContainerStarted","Data":"992681f6fbd20cb94d075722cc1c76752b2879e62cf291d445fca6a34460605d"} Sep 30 17:22:31 crc kubenswrapper[4821]: I0930 17:22:31.947860 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-778d8bb9d7-5pq8p" Sep 30 17:22:31 crc kubenswrapper[4821]: I0930 17:22:31.970212 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-778d8bb9d7-5pq8p" podStartSLOduration=2.970195101 podStartE2EDuration="2.970195101s" podCreationTimestamp="2025-09-30 17:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:22:31.966370806 +0000 UTC m=+1147.871416750" 
watchObservedRunningTime="2025-09-30 17:22:31.970195101 +0000 UTC m=+1147.875241035" Sep 30 17:22:32 crc kubenswrapper[4821]: I0930 17:22:32.699918 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Sep 30 17:22:32 crc kubenswrapper[4821]: I0930 17:22:32.700244 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d0aa01dc-304f-4cf5-b681-f4854024f85d" containerName="nova-api-log" containerID="cri-o://a0a18ee60ba01dad5852b30dcb42ea41fcf8ff3b8688dbd5e2ebae595732e1ed" gracePeriod=30 Sep 30 17:22:32 crc kubenswrapper[4821]: I0930 17:22:32.700304 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d0aa01dc-304f-4cf5-b681-f4854024f85d" containerName="nova-api-api" containerID="cri-o://426345a0357e10bbd8aafb0ba4ea8f283414e6e493b7f1a6e57173442c3ce763" gracePeriod=30 Sep 30 17:22:32 crc kubenswrapper[4821]: I0930 17:22:32.956943 4821 generic.go:334] "Generic (PLEG): container finished" podID="d0aa01dc-304f-4cf5-b681-f4854024f85d" containerID="a0a18ee60ba01dad5852b30dcb42ea41fcf8ff3b8688dbd5e2ebae595732e1ed" exitCode=143 Sep 30 17:22:32 crc kubenswrapper[4821]: I0930 17:22:32.957161 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0aa01dc-304f-4cf5-b681-f4854024f85d","Type":"ContainerDied","Data":"a0a18ee60ba01dad5852b30dcb42ea41fcf8ff3b8688dbd5e2ebae595732e1ed"} Sep 30 17:22:33 crc kubenswrapper[4821]: I0930 17:22:33.274142 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:22:33 crc kubenswrapper[4821]: I0930 17:22:33.281666 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Sep 30 17:22:33 crc kubenswrapper[4821]: I0930 17:22:33.281871 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Sep 30 17:22:35 crc kubenswrapper[4821]: I0930 17:22:35.989751 4821 generic.go:334] "Generic (PLEG): container finished" podID="d0aa01dc-304f-4cf5-b681-f4854024f85d" containerID="426345a0357e10bbd8aafb0ba4ea8f283414e6e493b7f1a6e57173442c3ce763" exitCode=0 Sep 30 17:22:35 crc kubenswrapper[4821]: I0930 17:22:35.989936 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0aa01dc-304f-4cf5-b681-f4854024f85d","Type":"ContainerDied","Data":"426345a0357e10bbd8aafb0ba4ea8f283414e6e493b7f1a6e57173442c3ce763"} Sep 30 17:22:36 crc kubenswrapper[4821]: I0930 17:22:36.330725 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Sep 30 17:22:36 crc kubenswrapper[4821]: I0930 17:22:36.475214 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0aa01dc-304f-4cf5-b681-f4854024f85d-logs\") pod \"d0aa01dc-304f-4cf5-b681-f4854024f85d\" (UID: \"d0aa01dc-304f-4cf5-b681-f4854024f85d\") " Sep 30 17:22:36 crc kubenswrapper[4821]: I0930 17:22:36.475336 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrn8d\" (UniqueName: \"kubernetes.io/projected/d0aa01dc-304f-4cf5-b681-f4854024f85d-kube-api-access-nrn8d\") pod \"d0aa01dc-304f-4cf5-b681-f4854024f85d\" (UID: \"d0aa01dc-304f-4cf5-b681-f4854024f85d\") " Sep 30 17:22:36 crc kubenswrapper[4821]: I0930 17:22:36.475368 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0aa01dc-304f-4cf5-b681-f4854024f85d-config-data\") pod \"d0aa01dc-304f-4cf5-b681-f4854024f85d\" (UID: \"d0aa01dc-304f-4cf5-b681-f4854024f85d\") " Sep 30 17:22:36 crc kubenswrapper[4821]: I0930 17:22:36.475438 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0aa01dc-304f-4cf5-b681-f4854024f85d-combined-ca-bundle\") pod \"d0aa01dc-304f-4cf5-b681-f4854024f85d\" (UID: \"d0aa01dc-304f-4cf5-b681-f4854024f85d\") " Sep 30 17:22:36 crc kubenswrapper[4821]: I0930 17:22:36.475706 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0aa01dc-304f-4cf5-b681-f4854024f85d-logs" (OuterVolumeSpecName: "logs") pod "d0aa01dc-304f-4cf5-b681-f4854024f85d" (UID: "d0aa01dc-304f-4cf5-b681-f4854024f85d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:22:36 crc kubenswrapper[4821]: I0930 17:22:36.476134 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0aa01dc-304f-4cf5-b681-f4854024f85d-logs\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:36 crc kubenswrapper[4821]: I0930 17:22:36.496721 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0aa01dc-304f-4cf5-b681-f4854024f85d-kube-api-access-nrn8d" (OuterVolumeSpecName: "kube-api-access-nrn8d") pod "d0aa01dc-304f-4cf5-b681-f4854024f85d" (UID: "d0aa01dc-304f-4cf5-b681-f4854024f85d"). InnerVolumeSpecName "kube-api-access-nrn8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:22:36 crc kubenswrapper[4821]: E0930 17:22:36.509631 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0aa01dc-304f-4cf5-b681-f4854024f85d-combined-ca-bundle podName:d0aa01dc-304f-4cf5-b681-f4854024f85d nodeName:}" failed. No retries permitted until 2025-09-30 17:22:37.009598846 +0000 UTC m=+1152.914644800 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/d0aa01dc-304f-4cf5-b681-f4854024f85d-combined-ca-bundle") pod "d0aa01dc-304f-4cf5-b681-f4854024f85d" (UID: "d0aa01dc-304f-4cf5-b681-f4854024f85d") : error deleting /var/lib/kubelet/pods/d0aa01dc-304f-4cf5-b681-f4854024f85d/volume-subpaths: remove /var/lib/kubelet/pods/d0aa01dc-304f-4cf5-b681-f4854024f85d/volume-subpaths: no such file or directory Sep 30 17:22:36 crc kubenswrapper[4821]: I0930 17:22:36.519012 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0aa01dc-304f-4cf5-b681-f4854024f85d-config-data" (OuterVolumeSpecName: "config-data") pod "d0aa01dc-304f-4cf5-b681-f4854024f85d" (UID: "d0aa01dc-304f-4cf5-b681-f4854024f85d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:22:36 crc kubenswrapper[4821]: I0930 17:22:36.577512 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrn8d\" (UniqueName: \"kubernetes.io/projected/d0aa01dc-304f-4cf5-b681-f4854024f85d-kube-api-access-nrn8d\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:36 crc kubenswrapper[4821]: I0930 17:22:36.577550 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0aa01dc-304f-4cf5-b681-f4854024f85d-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:36 crc kubenswrapper[4821]: I0930 17:22:36.999214 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0aa01dc-304f-4cf5-b681-f4854024f85d","Type":"ContainerDied","Data":"6413c82834c6e5f04d4c93bfc0d9fc0792dab4c886327c25aad23dc1d03a5aa5"} Sep 30 17:22:36 crc kubenswrapper[4821]: I0930 17:22:36.999265 4821 scope.go:117] "RemoveContainer" containerID="426345a0357e10bbd8aafb0ba4ea8f283414e6e493b7f1a6e57173442c3ce763" Sep 30 17:22:36 crc kubenswrapper[4821]: I0930 17:22:36.999375 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Sep 30 17:22:37 crc kubenswrapper[4821]: I0930 17:22:37.016801 4821 scope.go:117] "RemoveContainer" containerID="a0a18ee60ba01dad5852b30dcb42ea41fcf8ff3b8688dbd5e2ebae595732e1ed" Sep 30 17:22:37 crc kubenswrapper[4821]: I0930 17:22:37.086394 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0aa01dc-304f-4cf5-b681-f4854024f85d-combined-ca-bundle\") pod \"d0aa01dc-304f-4cf5-b681-f4854024f85d\" (UID: \"d0aa01dc-304f-4cf5-b681-f4854024f85d\") " Sep 30 17:22:37 crc kubenswrapper[4821]: I0930 17:22:37.091364 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0aa01dc-304f-4cf5-b681-f4854024f85d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0aa01dc-304f-4cf5-b681-f4854024f85d" (UID: "d0aa01dc-304f-4cf5-b681-f4854024f85d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:22:37 crc kubenswrapper[4821]: I0930 17:22:37.189011 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0aa01dc-304f-4cf5-b681-f4854024f85d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:37 crc kubenswrapper[4821]: I0930 17:22:37.359388 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Sep 30 17:22:37 crc kubenswrapper[4821]: I0930 17:22:37.385198 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Sep 30 17:22:37 crc kubenswrapper[4821]: I0930 17:22:37.397547 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Sep 30 17:22:37 crc kubenswrapper[4821]: E0930 17:22:37.398266 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0aa01dc-304f-4cf5-b681-f4854024f85d" containerName="nova-api-api" Sep 30 17:22:37 crc kubenswrapper[4821]: I0930 17:22:37.398389 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0aa01dc-304f-4cf5-b681-f4854024f85d" containerName="nova-api-api" Sep 30 17:22:37 crc kubenswrapper[4821]: E0930 17:22:37.398501 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0aa01dc-304f-4cf5-b681-f4854024f85d" containerName="nova-api-log" Sep 30 17:22:37 crc kubenswrapper[4821]: I0930 17:22:37.398584 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0aa01dc-304f-4cf5-b681-f4854024f85d" containerName="nova-api-log" Sep 30 17:22:37 crc kubenswrapper[4821]: I0930 17:22:37.398882 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0aa01dc-304f-4cf5-b681-f4854024f85d" containerName="nova-api-log" Sep 30 17:22:37 crc kubenswrapper[4821]: I0930 17:22:37.399238 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0aa01dc-304f-4cf5-b681-f4854024f85d" containerName="nova-api-api" Sep 30 17:22:37 crc kubenswrapper[4821]: I0930 17:22:37.402341 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Sep 30 17:22:37 crc kubenswrapper[4821]: I0930 17:22:37.409817 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Sep 30 17:22:37 crc kubenswrapper[4821]: I0930 17:22:37.413215 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Sep 30 17:22:37 crc kubenswrapper[4821]: I0930 17:22:37.413518 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Sep 30 17:22:37 crc kubenswrapper[4821]: I0930 17:22:37.413666 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Sep 30 17:22:37 crc kubenswrapper[4821]: I0930 17:22:37.492755 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kg2b\" (UniqueName: \"kubernetes.io/projected/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-kube-api-access-8kg2b\") pod \"nova-api-0\" (UID: \"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed\") " pod="openstack/nova-api-0" Sep 30 17:22:37 crc kubenswrapper[4821]: I0930 17:22:37.492807 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-config-data\") pod \"nova-api-0\" (UID: \"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed\") " pod="openstack/nova-api-0" Sep 30 17:22:37 crc kubenswrapper[4821]: I0930 17:22:37.492828 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-internal-tls-certs\") pod \"nova-api-0\" (UID: \"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed\") " pod="openstack/nova-api-0" Sep 30 17:22:37 crc kubenswrapper[4821]: I0930 17:22:37.492854 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-public-tls-certs\") pod \"nova-api-0\" (UID: \"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed\") " pod="openstack/nova-api-0" Sep 30 17:22:37 crc kubenswrapper[4821]: I0930 17:22:37.492901 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-logs\") pod \"nova-api-0\" (UID: \"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed\") " pod="openstack/nova-api-0" Sep 30 17:22:37 crc kubenswrapper[4821]: I0930 17:22:37.492916 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed\") " pod="openstack/nova-api-0" Sep 30 17:22:37 crc kubenswrapper[4821]: I0930 17:22:37.594557 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-config-data\") pod \"nova-api-0\" (UID: \"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed\") " pod="openstack/nova-api-0" Sep 30 17:22:37 crc kubenswrapper[4821]: I0930 17:22:37.594600 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-internal-tls-certs\") pod \"nova-api-0\" (UID: 
\"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed\") " pod="openstack/nova-api-0" Sep 30 17:22:37 crc kubenswrapper[4821]: I0930 17:22:37.594631 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-public-tls-certs\") pod \"nova-api-0\" (UID: \"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed\") " pod="openstack/nova-api-0" Sep 30 17:22:37 crc kubenswrapper[4821]: I0930 17:22:37.594683 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-logs\") pod \"nova-api-0\" (UID: \"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed\") " pod="openstack/nova-api-0" Sep 30 17:22:37 crc kubenswrapper[4821]: I0930 17:22:37.594704 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed\") " pod="openstack/nova-api-0" Sep 30 17:22:37 crc kubenswrapper[4821]: I0930 17:22:37.594817 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kg2b\" (UniqueName: \"kubernetes.io/projected/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-kube-api-access-8kg2b\") pod \"nova-api-0\" (UID: \"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed\") " pod="openstack/nova-api-0" Sep 30 17:22:37 crc kubenswrapper[4821]: I0930 17:22:37.595452 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-logs\") pod \"nova-api-0\" (UID: \"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed\") " pod="openstack/nova-api-0" Sep 30 17:22:37 crc kubenswrapper[4821]: I0930 17:22:37.599237 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-internal-tls-certs\") pod \"nova-api-0\" (UID: \"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed\") " pod="openstack/nova-api-0" Sep 30 17:22:37 crc kubenswrapper[4821]: I0930 17:22:37.601626 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-config-data\") pod \"nova-api-0\" (UID: \"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed\") " pod="openstack/nova-api-0" Sep 30 17:22:37 crc kubenswrapper[4821]: I0930 17:22:37.603197 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-public-tls-certs\") pod \"nova-api-0\" (UID: \"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed\") " pod="openstack/nova-api-0" Sep 30 17:22:37 crc kubenswrapper[4821]: I0930 17:22:37.616823 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed\") " pod="openstack/nova-api-0" Sep 30 17:22:37 crc kubenswrapper[4821]: I0930 17:22:37.617344 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kg2b\" (UniqueName: \"kubernetes.io/projected/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-kube-api-access-8kg2b\") pod \"nova-api-0\" (UID: \"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed\") " pod="openstack/nova-api-0" Sep 
30 17:22:37 crc kubenswrapper[4821]: I0930 17:22:37.721825 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Sep 30 17:22:38 crc kubenswrapper[4821]: I0930 17:22:38.206511 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Sep 30 17:22:38 crc kubenswrapper[4821]: I0930 17:22:38.275503 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:22:38 crc kubenswrapper[4821]: I0930 17:22:38.281606 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Sep 30 17:22:38 crc kubenswrapper[4821]: I0930 17:22:38.281638 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Sep 30 17:22:38 crc kubenswrapper[4821]: I0930 17:22:38.300779 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:22:38 crc kubenswrapper[4821]: I0930 17:22:38.718787 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0aa01dc-304f-4cf5-b681-f4854024f85d" path="/var/lib/kubelet/pods/d0aa01dc-304f-4cf5-b681-f4854024f85d/volumes" Sep 30 17:22:39 crc kubenswrapper[4821]: I0930 17:22:39.035213 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed","Type":"ContainerStarted","Data":"7342c0c0c46e010f42d15bae1042edf597941d1af467c92e39f40883c28bfe90"} Sep 30 17:22:39 crc kubenswrapper[4821]: I0930 17:22:39.052943 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed","Type":"ContainerStarted","Data":"392fd3f3516072e5e4e8d38a1c1660be57b878ad9d2e170bcc4f78adb706ffa0"} Sep 30 17:22:39 crc kubenswrapper[4821]: I0930 17:22:39.052977 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed","Type":"ContainerStarted","Data":"3c86c3c5d09fcbc950675a3f50a6a18a116cbfe2d927da7d4ade3acf2c8d55b6"} Sep 30 17:22:39 crc kubenswrapper[4821]: I0930 17:22:39.063826 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:22:39 crc kubenswrapper[4821]: I0930 17:22:39.092427 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.092404679 podStartE2EDuration="2.092404679s" podCreationTimestamp="2025-09-30 17:22:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:22:39.054531797 +0000 UTC m=+1154.959577741" watchObservedRunningTime="2025-09-30 17:22:39.092404679 +0000 UTC m=+1154.997450613" Sep 30 17:22:39 crc kubenswrapper[4821]: I0930 17:22:39.253746 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-c4v7j"] Sep 30 17:22:39 crc kubenswrapper[4821]: I0930 17:22:39.255734 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-c4v7j" Sep 30 17:22:39 crc kubenswrapper[4821]: I0930 17:22:39.265144 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Sep 30 17:22:39 crc kubenswrapper[4821]: I0930 17:22:39.265839 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Sep 30 17:22:39 crc kubenswrapper[4821]: I0930 17:22:39.270961 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-c4v7j"] Sep 30 17:22:39 crc kubenswrapper[4821]: I0930 17:22:39.294423 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6715a782-71c0-4471-971c-0d7f7fada4a1" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.179:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 30 17:22:39 crc kubenswrapper[4821]: I0930 17:22:39.304570 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6715a782-71c0-4471-971c-0d7f7fada4a1" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.179:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 30 17:22:39 crc kubenswrapper[4821]: I0930 17:22:39.331193 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c003c616-ec69-450f-b2bd-0a9fb2d84cfa-config-data\") pod \"nova-cell1-cell-mapping-c4v7j\" (UID: \"c003c616-ec69-450f-b2bd-0a9fb2d84cfa\") " pod="openstack/nova-cell1-cell-mapping-c4v7j" Sep 30 17:22:39 crc kubenswrapper[4821]: I0930 17:22:39.331463 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c003c616-ec69-450f-b2bd-0a9fb2d84cfa-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-c4v7j\" (UID: \"c003c616-ec69-450f-b2bd-0a9fb2d84cfa\") " pod="openstack/nova-cell1-cell-mapping-c4v7j" Sep 30 17:22:39 crc kubenswrapper[4821]: I0930 17:22:39.331634 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c003c616-ec69-450f-b2bd-0a9fb2d84cfa-scripts\") pod \"nova-cell1-cell-mapping-c4v7j\" (UID: \"c003c616-ec69-450f-b2bd-0a9fb2d84cfa\") " pod="openstack/nova-cell1-cell-mapping-c4v7j" Sep 30 17:22:39 crc kubenswrapper[4821]: I0930 17:22:39.331743 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7wc7\" (UniqueName: \"kubernetes.io/projected/c003c616-ec69-450f-b2bd-0a9fb2d84cfa-kube-api-access-j7wc7\") pod \"nova-cell1-cell-mapping-c4v7j\" (UID: \"c003c616-ec69-450f-b2bd-0a9fb2d84cfa\") " pod="openstack/nova-cell1-cell-mapping-c4v7j" Sep 30 17:22:39 crc kubenswrapper[4821]: I0930 17:22:39.433548 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c003c616-ec69-450f-b2bd-0a9fb2d84cfa-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-c4v7j\" (UID: \"c003c616-ec69-450f-b2bd-0a9fb2d84cfa\") " pod="openstack/nova-cell1-cell-mapping-c4v7j" Sep 30 17:22:39 crc kubenswrapper[4821]: I0930 17:22:39.433616 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c003c616-ec69-450f-b2bd-0a9fb2d84cfa-scripts\") pod \"nova-cell1-cell-mapping-c4v7j\" (UID: \"c003c616-ec69-450f-b2bd-0a9fb2d84cfa\") " pod="openstack/nova-cell1-cell-mapping-c4v7j" Sep 30 17:22:39 crc kubenswrapper[4821]: I0930 17:22:39.433646 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7wc7\" (UniqueName: \"kubernetes.io/projected/c003c616-ec69-450f-b2bd-0a9fb2d84cfa-kube-api-access-j7wc7\") pod \"nova-cell1-cell-mapping-c4v7j\" (UID: \"c003c616-ec69-450f-b2bd-0a9fb2d84cfa\") " pod="openstack/nova-cell1-cell-mapping-c4v7j" Sep 30 17:22:39 crc kubenswrapper[4821]: I0930 17:22:39.433740 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c003c616-ec69-450f-b2bd-0a9fb2d84cfa-config-data\") pod \"nova-cell1-cell-mapping-c4v7j\" (UID: \"c003c616-ec69-450f-b2bd-0a9fb2d84cfa\") " pod="openstack/nova-cell1-cell-mapping-c4v7j" Sep 30 17:22:39 crc kubenswrapper[4821]: I0930 17:22:39.439736 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c003c616-ec69-450f-b2bd-0a9fb2d84cfa-scripts\") pod \"nova-cell1-cell-mapping-c4v7j\" (UID: \"c003c616-ec69-450f-b2bd-0a9fb2d84cfa\") " pod="openstack/nova-cell1-cell-mapping-c4v7j" Sep 30 17:22:39 crc kubenswrapper[4821]: I0930 17:22:39.440766 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c003c616-ec69-450f-b2bd-0a9fb2d84cfa-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-c4v7j\" (UID: \"c003c616-ec69-450f-b2bd-0a9fb2d84cfa\") " pod="openstack/nova-cell1-cell-mapping-c4v7j" Sep 30 17:22:39 crc kubenswrapper[4821]: I0930 17:22:39.441512 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c003c616-ec69-450f-b2bd-0a9fb2d84cfa-config-data\") pod \"nova-cell1-cell-mapping-c4v7j\" (UID: \"c003c616-ec69-450f-b2bd-0a9fb2d84cfa\") " pod="openstack/nova-cell1-cell-mapping-c4v7j" Sep 30 17:22:39 crc kubenswrapper[4821]: I0930 17:22:39.457125 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7wc7\" (UniqueName: \"kubernetes.io/projected/c003c616-ec69-450f-b2bd-0a9fb2d84cfa-kube-api-access-j7wc7\") pod \"nova-cell1-cell-mapping-c4v7j\" (UID: \"c003c616-ec69-450f-b2bd-0a9fb2d84cfa\") " pod="openstack/nova-cell1-cell-mapping-c4v7j" Sep 30 17:22:39 crc kubenswrapper[4821]: I0930 17:22:39.585584 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-c4v7j" Sep 30 17:22:40 crc kubenswrapper[4821]: I0930 17:22:40.012328 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-778d8bb9d7-5pq8p" Sep 30 17:22:40 crc kubenswrapper[4821]: I0930 17:22:40.080345 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-745f868dcf-lrft6"] Sep 30 17:22:40 crc kubenswrapper[4821]: I0930 17:22:40.080551 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-745f868dcf-lrft6" podUID="5d492225-4a98-4973-b1e6-e24b17dfd0a3" containerName="dnsmasq-dns" containerID="cri-o://47783fae952d2de6f087ce315dd302d2af530d8373c74b121262e6187acd6444" gracePeriod=10 Sep 30 17:22:40 crc kubenswrapper[4821]: I0930 17:22:40.127418 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-c4v7j"] Sep 30 17:22:40 crc kubenswrapper[4821]: W0930 17:22:40.171633 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc003c616_ec69_450f_b2bd_0a9fb2d84cfa.slice/crio-d8c480ade6d7e3f2788fe22c663a0dadca0a3116ca3fdf01bf32570d3c47221a WatchSource:0}: Error finding container d8c480ade6d7e3f2788fe22c663a0dadca0a3116ca3fdf01bf32570d3c47221a: Status 404 returned error can't find the container with id d8c480ade6d7e3f2788fe22c663a0dadca0a3116ca3fdf01bf32570d3c47221a Sep 30 17:22:40 crc kubenswrapper[4821]: I0930 17:22:40.683864 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-745f868dcf-lrft6" Sep 30 17:22:40 crc kubenswrapper[4821]: I0930 17:22:40.758663 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfzx7\" (UniqueName: \"kubernetes.io/projected/5d492225-4a98-4973-b1e6-e24b17dfd0a3-kube-api-access-jfzx7\") pod \"5d492225-4a98-4973-b1e6-e24b17dfd0a3\" (UID: \"5d492225-4a98-4973-b1e6-e24b17dfd0a3\") " Sep 30 17:22:40 crc kubenswrapper[4821]: I0930 17:22:40.758734 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d492225-4a98-4973-b1e6-e24b17dfd0a3-ovsdbserver-sb\") pod \"5d492225-4a98-4973-b1e6-e24b17dfd0a3\" (UID: \"5d492225-4a98-4973-b1e6-e24b17dfd0a3\") " Sep 30 17:22:40 crc kubenswrapper[4821]: I0930 17:22:40.758940 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d492225-4a98-4973-b1e6-e24b17dfd0a3-ovsdbserver-nb\") pod \"5d492225-4a98-4973-b1e6-e24b17dfd0a3\" (UID: \"5d492225-4a98-4973-b1e6-e24b17dfd0a3\") " Sep 30 17:22:40 crc kubenswrapper[4821]: I0930 17:22:40.758974 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d492225-4a98-4973-b1e6-e24b17dfd0a3-config\") pod \"5d492225-4a98-4973-b1e6-e24b17dfd0a3\" (UID: \"5d492225-4a98-4973-b1e6-e24b17dfd0a3\") " Sep 30 17:22:40 crc kubenswrapper[4821]: I0930 17:22:40.759050 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d492225-4a98-4973-b1e6-e24b17dfd0a3-dns-svc\") pod \"5d492225-4a98-4973-b1e6-e24b17dfd0a3\" (UID: \"5d492225-4a98-4973-b1e6-e24b17dfd0a3\") " Sep 30 17:22:40 crc kubenswrapper[4821]: I0930 17:22:40.783255 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/5d492225-4a98-4973-b1e6-e24b17dfd0a3-kube-api-access-jfzx7" (OuterVolumeSpecName: "kube-api-access-jfzx7") pod "5d492225-4a98-4973-b1e6-e24b17dfd0a3" (UID: "5d492225-4a98-4973-b1e6-e24b17dfd0a3"). InnerVolumeSpecName "kube-api-access-jfzx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:22:40 crc kubenswrapper[4821]: I0930 17:22:40.863026 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfzx7\" (UniqueName: \"kubernetes.io/projected/5d492225-4a98-4973-b1e6-e24b17dfd0a3-kube-api-access-jfzx7\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:40 crc kubenswrapper[4821]: I0930 17:22:40.898252 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d492225-4a98-4973-b1e6-e24b17dfd0a3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5d492225-4a98-4973-b1e6-e24b17dfd0a3" (UID: "5d492225-4a98-4973-b1e6-e24b17dfd0a3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:22:40 crc kubenswrapper[4821]: I0930 17:22:40.911600 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d492225-4a98-4973-b1e6-e24b17dfd0a3-config" (OuterVolumeSpecName: "config") pod "5d492225-4a98-4973-b1e6-e24b17dfd0a3" (UID: "5d492225-4a98-4973-b1e6-e24b17dfd0a3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:22:40 crc kubenswrapper[4821]: I0930 17:22:40.952019 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d492225-4a98-4973-b1e6-e24b17dfd0a3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5d492225-4a98-4973-b1e6-e24b17dfd0a3" (UID: "5d492225-4a98-4973-b1e6-e24b17dfd0a3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:22:40 crc kubenswrapper[4821]: I0930 17:22:40.964901 4821 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d492225-4a98-4973-b1e6-e24b17dfd0a3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:40 crc kubenswrapper[4821]: I0930 17:22:40.964938 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d492225-4a98-4973-b1e6-e24b17dfd0a3-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:40 crc kubenswrapper[4821]: I0930 17:22:40.964948 4821 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d492225-4a98-4973-b1e6-e24b17dfd0a3-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:40 crc kubenswrapper[4821]: I0930 17:22:40.968381 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d492225-4a98-4973-b1e6-e24b17dfd0a3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5d492225-4a98-4973-b1e6-e24b17dfd0a3" (UID: "5d492225-4a98-4973-b1e6-e24b17dfd0a3"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:22:41 crc kubenswrapper[4821]: I0930 17:22:41.052335 4821 generic.go:334] "Generic (PLEG): container finished" podID="5d492225-4a98-4973-b1e6-e24b17dfd0a3" containerID="47783fae952d2de6f087ce315dd302d2af530d8373c74b121262e6187acd6444" exitCode=0 Sep 30 17:22:41 crc kubenswrapper[4821]: I0930 17:22:41.052391 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745f868dcf-lrft6" event={"ID":"5d492225-4a98-4973-b1e6-e24b17dfd0a3","Type":"ContainerDied","Data":"47783fae952d2de6f087ce315dd302d2af530d8373c74b121262e6187acd6444"} Sep 30 17:22:41 crc kubenswrapper[4821]: I0930 17:22:41.052416 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745f868dcf-lrft6" event={"ID":"5d492225-4a98-4973-b1e6-e24b17dfd0a3","Type":"ContainerDied","Data":"dc126d012dd048ee23db7eae04725db10dda32c6163a7ffd7e97280ecb52d939"} Sep 30 17:22:41 crc kubenswrapper[4821]: I0930 17:22:41.052433 4821 scope.go:117] "RemoveContainer" containerID="47783fae952d2de6f087ce315dd302d2af530d8373c74b121262e6187acd6444" Sep 30 17:22:41 crc kubenswrapper[4821]: I0930 17:22:41.052531 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-745f868dcf-lrft6" Sep 30 17:22:41 crc kubenswrapper[4821]: I0930 17:22:41.055956 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-c4v7j" event={"ID":"c003c616-ec69-450f-b2bd-0a9fb2d84cfa","Type":"ContainerStarted","Data":"3ddc37035b4eb39ee3825f171c45ee01ab6ca5795e70581995d89c601eb554b3"} Sep 30 17:22:41 crc kubenswrapper[4821]: I0930 17:22:41.056004 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-c4v7j" event={"ID":"c003c616-ec69-450f-b2bd-0a9fb2d84cfa","Type":"ContainerStarted","Data":"d8c480ade6d7e3f2788fe22c663a0dadca0a3116ca3fdf01bf32570d3c47221a"} Sep 30 17:22:41 crc kubenswrapper[4821]: I0930 17:22:41.066042 4821 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d492225-4a98-4973-b1e6-e24b17dfd0a3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:41 crc kubenswrapper[4821]: I0930 17:22:41.074801 4821 scope.go:117] "RemoveContainer" containerID="464342108fbb5c9dff37abd8403c306ba9cbb80d6f3fb6321b33f205e6bce4e9" Sep 30 17:22:41 crc kubenswrapper[4821]: I0930 17:22:41.087379 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-c4v7j" podStartSLOduration=2.087363615 podStartE2EDuration="2.087363615s" podCreationTimestamp="2025-09-30 17:22:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:22:41.084007631 +0000 UTC m=+1156.989053575" watchObservedRunningTime="2025-09-30 17:22:41.087363615 +0000 UTC m=+1156.992409559" Sep 30 17:22:41 crc kubenswrapper[4821]: I0930 17:22:41.124050 4821 scope.go:117] "RemoveContainer" containerID="47783fae952d2de6f087ce315dd302d2af530d8373c74b121262e6187acd6444" Sep 30 17:22:41 crc kubenswrapper[4821]: E0930 17:22:41.124491 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47783fae952d2de6f087ce315dd302d2af530d8373c74b121262e6187acd6444\": container with ID starting with 47783fae952d2de6f087ce315dd302d2af530d8373c74b121262e6187acd6444 not found: ID does not exist" 
containerID="47783fae952d2de6f087ce315dd302d2af530d8373c74b121262e6187acd6444" Sep 30 17:22:41 crc kubenswrapper[4821]: I0930 17:22:41.124523 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47783fae952d2de6f087ce315dd302d2af530d8373c74b121262e6187acd6444"} err="failed to get container status \"47783fae952d2de6f087ce315dd302d2af530d8373c74b121262e6187acd6444\": rpc error: code = NotFound desc = could not find container \"47783fae952d2de6f087ce315dd302d2af530d8373c74b121262e6187acd6444\": container with ID starting with 47783fae952d2de6f087ce315dd302d2af530d8373c74b121262e6187acd6444 not found: ID does not exist" Sep 30 17:22:41 crc kubenswrapper[4821]: I0930 17:22:41.124551 4821 scope.go:117] "RemoveContainer" containerID="464342108fbb5c9dff37abd8403c306ba9cbb80d6f3fb6321b33f205e6bce4e9" Sep 30 17:22:41 crc kubenswrapper[4821]: E0930 17:22:41.126276 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"464342108fbb5c9dff37abd8403c306ba9cbb80d6f3fb6321b33f205e6bce4e9\": container with ID starting with 464342108fbb5c9dff37abd8403c306ba9cbb80d6f3fb6321b33f205e6bce4e9 not found: ID does not exist" containerID="464342108fbb5c9dff37abd8403c306ba9cbb80d6f3fb6321b33f205e6bce4e9" Sep 30 17:22:41 crc kubenswrapper[4821]: I0930 17:22:41.126400 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"464342108fbb5c9dff37abd8403c306ba9cbb80d6f3fb6321b33f205e6bce4e9"} err="failed to get container status \"464342108fbb5c9dff37abd8403c306ba9cbb80d6f3fb6321b33f205e6bce4e9\": rpc error: code = NotFound desc = could not find container \"464342108fbb5c9dff37abd8403c306ba9cbb80d6f3fb6321b33f205e6bce4e9\": container with ID starting with 464342108fbb5c9dff37abd8403c306ba9cbb80d6f3fb6321b33f205e6bce4e9 not found: ID does not exist" Sep 30 17:22:41 crc kubenswrapper[4821]: I0930 17:22:41.144925 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-745f868dcf-lrft6"] Sep 30 17:22:41 crc kubenswrapper[4821]: I0930 17:22:41.152956 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-745f868dcf-lrft6"] Sep 30 17:22:42 crc kubenswrapper[4821]: I0930 17:22:42.737334 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d492225-4a98-4973-b1e6-e24b17dfd0a3" path="/var/lib/kubelet/pods/5d492225-4a98-4973-b1e6-e24b17dfd0a3/volumes" Sep 30 17:22:46 crc kubenswrapper[4821]: I0930 17:22:46.095411 4821 generic.go:334] "Generic (PLEG): container finished" podID="c003c616-ec69-450f-b2bd-0a9fb2d84cfa" containerID="3ddc37035b4eb39ee3825f171c45ee01ab6ca5795e70581995d89c601eb554b3" exitCode=0 Sep 30 17:22:46 crc kubenswrapper[4821]: I0930 17:22:46.095541 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-c4v7j" event={"ID":"c003c616-ec69-450f-b2bd-0a9fb2d84cfa","Type":"ContainerDied","Data":"3ddc37035b4eb39ee3825f171c45ee01ab6ca5795e70581995d89c601eb554b3"} Sep 30 17:22:47 crc kubenswrapper[4821]: I0930 17:22:47.471471 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-c4v7j" Sep 30 17:22:47 crc kubenswrapper[4821]: I0930 17:22:47.602113 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c003c616-ec69-450f-b2bd-0a9fb2d84cfa-combined-ca-bundle\") pod \"c003c616-ec69-450f-b2bd-0a9fb2d84cfa\" (UID: \"c003c616-ec69-450f-b2bd-0a9fb2d84cfa\") " Sep 30 17:22:47 crc kubenswrapper[4821]: I0930 17:22:47.602484 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7wc7\" (UniqueName: \"kubernetes.io/projected/c003c616-ec69-450f-b2bd-0a9fb2d84cfa-kube-api-access-j7wc7\") pod \"c003c616-ec69-450f-b2bd-0a9fb2d84cfa\" (UID: \"c003c616-ec69-450f-b2bd-0a9fb2d84cfa\") " Sep 30 17:22:47 crc kubenswrapper[4821]: I0930 17:22:47.603326 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c003c616-ec69-450f-b2bd-0a9fb2d84cfa-config-data\") pod \"c003c616-ec69-450f-b2bd-0a9fb2d84cfa\" (UID: \"c003c616-ec69-450f-b2bd-0a9fb2d84cfa\") " Sep 30 17:22:47 crc kubenswrapper[4821]: I0930 17:22:47.603396 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c003c616-ec69-450f-b2bd-0a9fb2d84cfa-scripts\") pod \"c003c616-ec69-450f-b2bd-0a9fb2d84cfa\" (UID: \"c003c616-ec69-450f-b2bd-0a9fb2d84cfa\") " Sep 30 17:22:47 crc kubenswrapper[4821]: I0930 17:22:47.611376 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c003c616-ec69-450f-b2bd-0a9fb2d84cfa-scripts" (OuterVolumeSpecName: "scripts") pod "c003c616-ec69-450f-b2bd-0a9fb2d84cfa" (UID: "c003c616-ec69-450f-b2bd-0a9fb2d84cfa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:22:47 crc kubenswrapper[4821]: I0930 17:22:47.611514 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c003c616-ec69-450f-b2bd-0a9fb2d84cfa-kube-api-access-j7wc7" (OuterVolumeSpecName: "kube-api-access-j7wc7") pod "c003c616-ec69-450f-b2bd-0a9fb2d84cfa" (UID: "c003c616-ec69-450f-b2bd-0a9fb2d84cfa"). InnerVolumeSpecName "kube-api-access-j7wc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:22:47 crc kubenswrapper[4821]: I0930 17:22:47.635165 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c003c616-ec69-450f-b2bd-0a9fb2d84cfa-config-data" (OuterVolumeSpecName: "config-data") pod "c003c616-ec69-450f-b2bd-0a9fb2d84cfa" (UID: "c003c616-ec69-450f-b2bd-0a9fb2d84cfa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:22:47 crc kubenswrapper[4821]: I0930 17:22:47.655231 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c003c616-ec69-450f-b2bd-0a9fb2d84cfa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c003c616-ec69-450f-b2bd-0a9fb2d84cfa" (UID: "c003c616-ec69-450f-b2bd-0a9fb2d84cfa"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:22:47 crc kubenswrapper[4821]: I0930 17:22:47.705236 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c003c616-ec69-450f-b2bd-0a9fb2d84cfa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:47 crc kubenswrapper[4821]: I0930 17:22:47.705405 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7wc7\" (UniqueName: \"kubernetes.io/projected/c003c616-ec69-450f-b2bd-0a9fb2d84cfa-kube-api-access-j7wc7\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:47 crc kubenswrapper[4821]: I0930 17:22:47.705466 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c003c616-ec69-450f-b2bd-0a9fb2d84cfa-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:47 crc kubenswrapper[4821]: I0930 17:22:47.705519 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c003c616-ec69-450f-b2bd-0a9fb2d84cfa-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:47 crc kubenswrapper[4821]: I0930 17:22:47.722330 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Sep 30 17:22:47 crc kubenswrapper[4821]: I0930 17:22:47.722383 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Sep 30 17:22:48 crc kubenswrapper[4821]: I0930 17:22:48.113716 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-c4v7j" event={"ID":"c003c616-ec69-450f-b2bd-0a9fb2d84cfa","Type":"ContainerDied","Data":"d8c480ade6d7e3f2788fe22c663a0dadca0a3116ca3fdf01bf32570d3c47221a"} Sep 30 17:22:48 crc kubenswrapper[4821]: I0930 17:22:48.113759 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8c480ade6d7e3f2788fe22c663a0dadca0a3116ca3fdf01bf32570d3c47221a" Sep 30 17:22:48 crc kubenswrapper[4821]: I0930 17:22:48.113784 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-c4v7j" Sep 30 17:22:48 crc kubenswrapper[4821]: I0930 17:22:48.295041 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Sep 30 17:22:48 crc kubenswrapper[4821]: I0930 17:22:48.295292 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed" containerName="nova-api-log" containerID="cri-o://392fd3f3516072e5e4e8d38a1c1660be57b878ad9d2e170bcc4f78adb706ffa0" gracePeriod=30 Sep 30 17:22:48 crc kubenswrapper[4821]: I0930 17:22:48.295387 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed" containerName="nova-api-api" containerID="cri-o://7342c0c0c46e010f42d15bae1042edf597941d1af467c92e39f40883c28bfe90" gracePeriod=30 Sep 30 17:22:48 crc kubenswrapper[4821]: I0930 17:22:48.301908 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Sep 30 17:22:48 crc kubenswrapper[4821]: I0930 17:22:48.302165 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="b825d54c-6a7f-4f40-a9e1-1cd385220b3c" containerName="nova-scheduler-scheduler" containerID="cri-o://e94926283e163be67b35b00cac1c2149db18dd172c5e70f46623813b0117ad3f" gracePeriod=30 Sep 30 17:22:48 crc kubenswrapper[4821]: I0930 17:22:48.311275 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Sep 30 17:22:48 crc kubenswrapper[4821]: I0930 17:22:48.321169 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.181:8774/\": EOF" Sep 30 17:22:48 crc kubenswrapper[4821]: I0930 17:22:48.321387 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.181:8774/\": EOF" Sep 30 17:22:48 crc kubenswrapper[4821]: I0930 17:22:48.327124 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Sep 30 17:22:48 crc kubenswrapper[4821]: I0930 17:22:48.359176 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Sep 30 17:22:48 crc kubenswrapper[4821]: I0930 17:22:48.360591 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Sep 30 17:22:49 crc kubenswrapper[4821]: E0930 17:22:49.002909 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e94926283e163be67b35b00cac1c2149db18dd172c5e70f46623813b0117ad3f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Sep 30 17:22:49 crc kubenswrapper[4821]: E0930 17:22:49.004511 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e94926283e163be67b35b00cac1c2149db18dd172c5e70f46623813b0117ad3f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Sep 30 17:22:49 crc kubenswrapper[4821]: E0930 17:22:49.005797 4821 log.go:32] "ExecSync cmd from runtime service 
failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e94926283e163be67b35b00cac1c2149db18dd172c5e70f46623813b0117ad3f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Sep 30 17:22:49 crc kubenswrapper[4821]: E0930 17:22:49.005837 4821 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="b825d54c-6a7f-4f40-a9e1-1cd385220b3c" containerName="nova-scheduler-scheduler" Sep 30 17:22:49 crc kubenswrapper[4821]: I0930 17:22:49.122930 4821 generic.go:334] "Generic (PLEG): container finished" podID="35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed" containerID="392fd3f3516072e5e4e8d38a1c1660be57b878ad9d2e170bcc4f78adb706ffa0" exitCode=143 Sep 30 17:22:49 crc kubenswrapper[4821]: I0930 17:22:49.123002 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed","Type":"ContainerDied","Data":"392fd3f3516072e5e4e8d38a1c1660be57b878ad9d2e170bcc4f78adb706ffa0"} Sep 30 17:22:49 crc kubenswrapper[4821]: I0930 17:22:49.123161 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6715a782-71c0-4471-971c-0d7f7fada4a1" containerName="nova-metadata-log" containerID="cri-o://83b369a38848bd390aab9fb95d1591da553911682724248fc6a2e9a1f4dff3ce" gracePeriod=30 Sep 30 17:22:49 crc kubenswrapper[4821]: I0930 17:22:49.123254 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6715a782-71c0-4471-971c-0d7f7fada4a1" containerName="nova-metadata-metadata" containerID="cri-o://9e6e331360d33ef8daa28e9d192f2a23385cd1fbc546916b2dfa600658c71a1d" gracePeriod=30 Sep 30 17:22:49 crc kubenswrapper[4821]: I0930 17:22:49.226842 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="6715a782-71c0-4471-971c-0d7f7fada4a1" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.179:8775/\": EOF" Sep 30 17:22:49 crc kubenswrapper[4821]: I0930 17:22:49.349930 4821 patch_prober.go:28] interesting pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:22:49 crc kubenswrapper[4821]: I0930 17:22:49.349984 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:22:49 crc kubenswrapper[4821]: I0930 17:22:49.350020 4821 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" Sep 30 17:22:49 crc kubenswrapper[4821]: I0930 17:22:49.350628 4821 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1763d8a2cafcce9c75309a8111559b7e2dfe05de5a45a9dc8c3faa88662ff830"} pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Sep 30 17:22:49 crc kubenswrapper[4821]: I0930 17:22:49.350688 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" containerID="cri-o://1763d8a2cafcce9c75309a8111559b7e2dfe05de5a45a9dc8c3faa88662ff830" gracePeriod=600 Sep 30 17:22:50 crc kubenswrapper[4821]: I0930 17:22:50.133778 4821 generic.go:334] "Generic (PLEG): container finished" podID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerID="1763d8a2cafcce9c75309a8111559b7e2dfe05de5a45a9dc8c3faa88662ff830" exitCode=0 Sep 30 17:22:50 crc kubenswrapper[4821]: I0930 17:22:50.133871 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" event={"ID":"1c2ce348-eadc-4629-a03f-fb8924b5b434","Type":"ContainerDied","Data":"1763d8a2cafcce9c75309a8111559b7e2dfe05de5a45a9dc8c3faa88662ff830"} Sep 30 17:22:50 crc kubenswrapper[4821]: I0930 17:22:50.134100 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" event={"ID":"1c2ce348-eadc-4629-a03f-fb8924b5b434","Type":"ContainerStarted","Data":"880f2361d0eb681ddddb941f0b7685c664d231bb6623d0569d1e69f40e5ec202"} Sep 30 17:22:50 crc kubenswrapper[4821]: I0930 17:22:50.134122 4821 scope.go:117] "RemoveContainer" containerID="b5eaf5939fe5362fd182fcbe1679c246dcaf1dbb07c54b7f2bf7e11a0269f3a6" Sep 30 17:22:50 crc kubenswrapper[4821]: I0930 17:22:50.144420 4821 generic.go:334] "Generic (PLEG): container finished" podID="6715a782-71c0-4471-971c-0d7f7fada4a1" containerID="83b369a38848bd390aab9fb95d1591da553911682724248fc6a2e9a1f4dff3ce" exitCode=143 Sep 30 17:22:50 crc kubenswrapper[4821]: I0930 17:22:50.144461 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6715a782-71c0-4471-971c-0d7f7fada4a1","Type":"ContainerDied","Data":"83b369a38848bd390aab9fb95d1591da553911682724248fc6a2e9a1f4dff3ce"} Sep 30 17:22:52 crc kubenswrapper[4821]: I0930 17:22:52.886976 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Sep 30 17:22:52 crc kubenswrapper[4821]: I0930 17:22:52.980350 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Sep 30 17:22:52 crc kubenswrapper[4821]: I0930 17:22:52.995968 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xtfz\" (UniqueName: \"kubernetes.io/projected/b825d54c-6a7f-4f40-a9e1-1cd385220b3c-kube-api-access-8xtfz\") pod \"b825d54c-6a7f-4f40-a9e1-1cd385220b3c\" (UID: \"b825d54c-6a7f-4f40-a9e1-1cd385220b3c\") " Sep 30 17:22:52 crc kubenswrapper[4821]: I0930 17:22:52.996058 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b825d54c-6a7f-4f40-a9e1-1cd385220b3c-config-data\") pod \"b825d54c-6a7f-4f40-a9e1-1cd385220b3c\" (UID: \"b825d54c-6a7f-4f40-a9e1-1cd385220b3c\") " Sep 30 17:22:52 crc kubenswrapper[4821]: I0930 17:22:52.996255 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b825d54c-6a7f-4f40-a9e1-1cd385220b3c-combined-ca-bundle\") pod \"b825d54c-6a7f-4f40-a9e1-1cd385220b3c\" (UID: \"b825d54c-6a7f-4f40-a9e1-1cd385220b3c\") " Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.008448 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b825d54c-6a7f-4f40-a9e1-1cd385220b3c-kube-api-access-8xtfz" (OuterVolumeSpecName: "kube-api-access-8xtfz") pod "b825d54c-6a7f-4f40-a9e1-1cd385220b3c" (UID: "b825d54c-6a7f-4f40-a9e1-1cd385220b3c"). InnerVolumeSpecName "kube-api-access-8xtfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.048162 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b825d54c-6a7f-4f40-a9e1-1cd385220b3c-config-data" (OuterVolumeSpecName: "config-data") pod "b825d54c-6a7f-4f40-a9e1-1cd385220b3c" (UID: "b825d54c-6a7f-4f40-a9e1-1cd385220b3c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.050843 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b825d54c-6a7f-4f40-a9e1-1cd385220b3c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b825d54c-6a7f-4f40-a9e1-1cd385220b3c" (UID: "b825d54c-6a7f-4f40-a9e1-1cd385220b3c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.098020 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6715a782-71c0-4471-971c-0d7f7fada4a1-nova-metadata-tls-certs\") pod \"6715a782-71c0-4471-971c-0d7f7fada4a1\" (UID: \"6715a782-71c0-4471-971c-0d7f7fada4a1\") " Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.098127 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g428m\" (UniqueName: \"kubernetes.io/projected/6715a782-71c0-4471-971c-0d7f7fada4a1-kube-api-access-g428m\") pod \"6715a782-71c0-4471-971c-0d7f7fada4a1\" (UID: \"6715a782-71c0-4471-971c-0d7f7fada4a1\") " Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.098167 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6715a782-71c0-4471-971c-0d7f7fada4a1-config-data\") pod \"6715a782-71c0-4471-971c-0d7f7fada4a1\" (UID: \"6715a782-71c0-4471-971c-0d7f7fada4a1\") " Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.098211 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6715a782-71c0-4471-971c-0d7f7fada4a1-logs\") pod \"6715a782-71c0-4471-971c-0d7f7fada4a1\" (UID: \"6715a782-71c0-4471-971c-0d7f7fada4a1\") " Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.098231 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6715a782-71c0-4471-971c-0d7f7fada4a1-combined-ca-bundle\") pod \"6715a782-71c0-4471-971c-0d7f7fada4a1\" (UID: \"6715a782-71c0-4471-971c-0d7f7fada4a1\") " Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.098581 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b825d54c-6a7f-4f40-a9e1-1cd385220b3c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.098600 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xtfz\" (UniqueName: \"kubernetes.io/projected/b825d54c-6a7f-4f40-a9e1-1cd385220b3c-kube-api-access-8xtfz\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.098610 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b825d54c-6a7f-4f40-a9e1-1cd385220b3c-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.099275 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6715a782-71c0-4471-971c-0d7f7fada4a1-logs" (OuterVolumeSpecName: "logs") pod "6715a782-71c0-4471-971c-0d7f7fada4a1" (UID: "6715a782-71c0-4471-971c-0d7f7fada4a1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.101727 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6715a782-71c0-4471-971c-0d7f7fada4a1-kube-api-access-g428m" (OuterVolumeSpecName: "kube-api-access-g428m") pod "6715a782-71c0-4471-971c-0d7f7fada4a1" (UID: "6715a782-71c0-4471-971c-0d7f7fada4a1"). InnerVolumeSpecName "kube-api-access-g428m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.125564 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6715a782-71c0-4471-971c-0d7f7fada4a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6715a782-71c0-4471-971c-0d7f7fada4a1" (UID: "6715a782-71c0-4471-971c-0d7f7fada4a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.127466 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6715a782-71c0-4471-971c-0d7f7fada4a1-config-data" (OuterVolumeSpecName: "config-data") pod "6715a782-71c0-4471-971c-0d7f7fada4a1" (UID: "6715a782-71c0-4471-971c-0d7f7fada4a1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.153254 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6715a782-71c0-4471-971c-0d7f7fada4a1-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "6715a782-71c0-4471-971c-0d7f7fada4a1" (UID: "6715a782-71c0-4471-971c-0d7f7fada4a1"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.171375 4821 generic.go:334] "Generic (PLEG): container finished" podID="6715a782-71c0-4471-971c-0d7f7fada4a1" containerID="9e6e331360d33ef8daa28e9d192f2a23385cd1fbc546916b2dfa600658c71a1d" exitCode=0 Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.171440 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6715a782-71c0-4471-971c-0d7f7fada4a1","Type":"ContainerDied","Data":"9e6e331360d33ef8daa28e9d192f2a23385cd1fbc546916b2dfa600658c71a1d"} Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.171471 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6715a782-71c0-4471-971c-0d7f7fada4a1","Type":"ContainerDied","Data":"e77afbd5733f41a20ce588fd1669ac2b231f42c0d6c95caf01d09afa42648556"} Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.171491 4821 scope.go:117] "RemoveContainer" containerID="9e6e331360d33ef8daa28e9d192f2a23385cd1fbc546916b2dfa600658c71a1d" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.171603 4821 util.go:48] "No ready sandbox for pod can be found. 
Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.178058 4821 generic.go:334] "Generic (PLEG): container finished" podID="b825d54c-6a7f-4f40-a9e1-1cd385220b3c" containerID="e94926283e163be67b35b00cac1c2149db18dd172c5e70f46623813b0117ad3f" exitCode=0
Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.178208 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b825d54c-6a7f-4f40-a9e1-1cd385220b3c","Type":"ContainerDied","Data":"e94926283e163be67b35b00cac1c2149db18dd172c5e70f46623813b0117ad3f"}
Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.178262 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b825d54c-6a7f-4f40-a9e1-1cd385220b3c","Type":"ContainerDied","Data":"4155062c64229344039de9dce1fee5a621c5fd8bd58c2ae25bd4888eeefad4cb"}
Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.178275 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.206706 4821 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6715a782-71c0-4471-971c-0d7f7fada4a1-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.206805 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g428m\" (UniqueName: \"kubernetes.io/projected/6715a782-71c0-4471-971c-0d7f7fada4a1-kube-api-access-g428m\") on node \"crc\" DevicePath \"\""
Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.206820 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6715a782-71c0-4471-971c-0d7f7fada4a1-config-data\") on node \"crc\" DevicePath \"\""
Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.206830 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6715a782-71c0-4471-971c-0d7f7fada4a1-logs\") on node \"crc\" DevicePath \"\""
Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.206839 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6715a782-71c0-4471-971c-0d7f7fada4a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.226745 4821 scope.go:117] "RemoveContainer" containerID="83b369a38848bd390aab9fb95d1591da553911682724248fc6a2e9a1f4dff3ce"
Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.258242 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.272540 4821 scope.go:117] "RemoveContainer" containerID="9e6e331360d33ef8daa28e9d192f2a23385cd1fbc546916b2dfa600658c71a1d"
Sep 30 17:22:53 crc kubenswrapper[4821]: E0930 17:22:53.273611 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e6e331360d33ef8daa28e9d192f2a23385cd1fbc546916b2dfa600658c71a1d\": container with ID starting with 9e6e331360d33ef8daa28e9d192f2a23385cd1fbc546916b2dfa600658c71a1d not found: ID does not exist" containerID="9e6e331360d33ef8daa28e9d192f2a23385cd1fbc546916b2dfa600658c71a1d"
Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.273638 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e6e331360d33ef8daa28e9d192f2a23385cd1fbc546916b2dfa600658c71a1d"} err="failed to get container status \"9e6e331360d33ef8daa28e9d192f2a23385cd1fbc546916b2dfa600658c71a1d\": rpc error: code = NotFound desc = could not find container \"9e6e331360d33ef8daa28e9d192f2a23385cd1fbc546916b2dfa600658c71a1d\": container with ID starting with 9e6e331360d33ef8daa28e9d192f2a23385cd1fbc546916b2dfa600658c71a1d not found: ID does not exist"
Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.273656 4821 scope.go:117] "RemoveContainer" containerID="83b369a38848bd390aab9fb95d1591da553911682724248fc6a2e9a1f4dff3ce"
Sep 30 17:22:53 crc kubenswrapper[4821]: E0930 17:22:53.276677 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83b369a38848bd390aab9fb95d1591da553911682724248fc6a2e9a1f4dff3ce\": container with ID starting with 83b369a38848bd390aab9fb95d1591da553911682724248fc6a2e9a1f4dff3ce not found: ID does not exist" containerID="83b369a38848bd390aab9fb95d1591da553911682724248fc6a2e9a1f4dff3ce"
Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.276760 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83b369a38848bd390aab9fb95d1591da553911682724248fc6a2e9a1f4dff3ce"} err="failed to get container status \"83b369a38848bd390aab9fb95d1591da553911682724248fc6a2e9a1f4dff3ce\": rpc error: code = NotFound desc = could not find container \"83b369a38848bd390aab9fb95d1591da553911682724248fc6a2e9a1f4dff3ce\": container with ID starting with 83b369a38848bd390aab9fb95d1591da553911682724248fc6a2e9a1f4dff3ce not found: ID does not exist"
Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.276849 4821 scope.go:117] "RemoveContainer" containerID="e94926283e163be67b35b00cac1c2149db18dd172c5e70f46623813b0117ad3f"
Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.299448 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.320394 4821 scope.go:117] "RemoveContainer" containerID="e94926283e163be67b35b00cac1c2149db18dd172c5e70f46623813b0117ad3f"
Sep 30 17:22:53 crc kubenswrapper[4821]: E0930 17:22:53.324571 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e94926283e163be67b35b00cac1c2149db18dd172c5e70f46623813b0117ad3f\": container with ID starting with e94926283e163be67b35b00cac1c2149db18dd172c5e70f46623813b0117ad3f not found: ID does not exist" containerID="e94926283e163be67b35b00cac1c2149db18dd172c5e70f46623813b0117ad3f"
Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.324681 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e94926283e163be67b35b00cac1c2149db18dd172c5e70f46623813b0117ad3f"} err="failed to get container status \"e94926283e163be67b35b00cac1c2149db18dd172c5e70f46623813b0117ad3f\": rpc error: code = NotFound desc = could not find container \"e94926283e163be67b35b00cac1c2149db18dd172c5e70f46623813b0117ad3f\": container with ID starting with e94926283e163be67b35b00cac1c2149db18dd172c5e70f46623813b0117ad3f not found: ID does not exist"
Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.341174 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.372152 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
pods=["openstack/nova-scheduler-0"] Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.382170 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Sep 30 17:22:53 crc kubenswrapper[4821]: E0930 17:22:53.382511 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d492225-4a98-4973-b1e6-e24b17dfd0a3" containerName="dnsmasq-dns" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.382530 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d492225-4a98-4973-b1e6-e24b17dfd0a3" containerName="dnsmasq-dns" Sep 30 17:22:53 crc kubenswrapper[4821]: E0930 17:22:53.382547 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6715a782-71c0-4471-971c-0d7f7fada4a1" containerName="nova-metadata-metadata" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.382555 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="6715a782-71c0-4471-971c-0d7f7fada4a1" containerName="nova-metadata-metadata" Sep 30 17:22:53 crc kubenswrapper[4821]: E0930 17:22:53.382572 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d492225-4a98-4973-b1e6-e24b17dfd0a3" containerName="init" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.382578 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d492225-4a98-4973-b1e6-e24b17dfd0a3" containerName="init" Sep 30 17:22:53 crc kubenswrapper[4821]: E0930 17:22:53.382590 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c003c616-ec69-450f-b2bd-0a9fb2d84cfa" containerName="nova-manage" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.382595 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="c003c616-ec69-450f-b2bd-0a9fb2d84cfa" containerName="nova-manage" Sep 30 17:22:53 crc kubenswrapper[4821]: E0930 17:22:53.382607 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b825d54c-6a7f-4f40-a9e1-1cd385220b3c" containerName="nova-scheduler-scheduler" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.382613 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="b825d54c-6a7f-4f40-a9e1-1cd385220b3c" containerName="nova-scheduler-scheduler" Sep 30 17:22:53 crc kubenswrapper[4821]: E0930 17:22:53.382627 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6715a782-71c0-4471-971c-0d7f7fada4a1" containerName="nova-metadata-log" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.382633 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="6715a782-71c0-4471-971c-0d7f7fada4a1" containerName="nova-metadata-log" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.382801 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d492225-4a98-4973-b1e6-e24b17dfd0a3" containerName="dnsmasq-dns" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.382823 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="6715a782-71c0-4471-971c-0d7f7fada4a1" containerName="nova-metadata-metadata" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.382831 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="6715a782-71c0-4471-971c-0d7f7fada4a1" containerName="nova-metadata-log" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.382839 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="b825d54c-6a7f-4f40-a9e1-1cd385220b3c" containerName="nova-scheduler-scheduler" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.382850 4821 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c003c616-ec69-450f-b2bd-0a9fb2d84cfa" containerName="nova-manage" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.383771 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.387447 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.387710 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.406446 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.407631 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.412447 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.415655 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.434205 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.528498 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/797f4c29-1e6b-4ecb-a85a-11e859b5b619-config-data\") pod \"nova-scheduler-0\" (UID: \"797f4c29-1e6b-4ecb-a85a-11e859b5b619\") " pod="openstack/nova-scheduler-0" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.528574 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc52755-8784-4644-beeb-f91f1ced1245-config-data\") pod \"nova-metadata-0\" (UID: \"cfc52755-8784-4644-beeb-f91f1ced1245\") " pod="openstack/nova-metadata-0" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.528621 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfc52755-8784-4644-beeb-f91f1ced1245-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cfc52755-8784-4644-beeb-f91f1ced1245\") " pod="openstack/nova-metadata-0" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.528685 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t5d4\" (UniqueName: \"kubernetes.io/projected/cfc52755-8784-4644-beeb-f91f1ced1245-kube-api-access-6t5d4\") pod \"nova-metadata-0\" (UID: \"cfc52755-8784-4644-beeb-f91f1ced1245\") " pod="openstack/nova-metadata-0" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.528713 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc52755-8784-4644-beeb-f91f1ced1245-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cfc52755-8784-4644-beeb-f91f1ced1245\") " pod="openstack/nova-metadata-0" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.528737 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/cfc52755-8784-4644-beeb-f91f1ced1245-logs\") pod \"nova-metadata-0\" (UID: \"cfc52755-8784-4644-beeb-f91f1ced1245\") " pod="openstack/nova-metadata-0" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.528759 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtc6c\" (UniqueName: \"kubernetes.io/projected/797f4c29-1e6b-4ecb-a85a-11e859b5b619-kube-api-access-xtc6c\") pod \"nova-scheduler-0\" (UID: \"797f4c29-1e6b-4ecb-a85a-11e859b5b619\") " pod="openstack/nova-scheduler-0" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.528784 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/797f4c29-1e6b-4ecb-a85a-11e859b5b619-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"797f4c29-1e6b-4ecb-a85a-11e859b5b619\") " pod="openstack/nova-scheduler-0" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.630222 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/797f4c29-1e6b-4ecb-a85a-11e859b5b619-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"797f4c29-1e6b-4ecb-a85a-11e859b5b619\") " pod="openstack/nova-scheduler-0" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.630298 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/797f4c29-1e6b-4ecb-a85a-11e859b5b619-config-data\") pod \"nova-scheduler-0\" (UID: \"797f4c29-1e6b-4ecb-a85a-11e859b5b619\") " pod="openstack/nova-scheduler-0" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.630334 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc52755-8784-4644-beeb-f91f1ced1245-config-data\") pod \"nova-metadata-0\" (UID: \"cfc52755-8784-4644-beeb-f91f1ced1245\") " pod="openstack/nova-metadata-0" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.630382 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfc52755-8784-4644-beeb-f91f1ced1245-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cfc52755-8784-4644-beeb-f91f1ced1245\") " pod="openstack/nova-metadata-0" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.630425 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t5d4\" (UniqueName: \"kubernetes.io/projected/cfc52755-8784-4644-beeb-f91f1ced1245-kube-api-access-6t5d4\") pod \"nova-metadata-0\" (UID: \"cfc52755-8784-4644-beeb-f91f1ced1245\") " pod="openstack/nova-metadata-0" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.630452 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc52755-8784-4644-beeb-f91f1ced1245-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cfc52755-8784-4644-beeb-f91f1ced1245\") " pod="openstack/nova-metadata-0" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.630474 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfc52755-8784-4644-beeb-f91f1ced1245-logs\") pod \"nova-metadata-0\" (UID: \"cfc52755-8784-4644-beeb-f91f1ced1245\") " pod="openstack/nova-metadata-0" Sep 30 17:22:53 crc kubenswrapper[4821]: 
I0930 17:22:53.630497 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtc6c\" (UniqueName: \"kubernetes.io/projected/797f4c29-1e6b-4ecb-a85a-11e859b5b619-kube-api-access-xtc6c\") pod \"nova-scheduler-0\" (UID: \"797f4c29-1e6b-4ecb-a85a-11e859b5b619\") " pod="openstack/nova-scheduler-0" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.631786 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfc52755-8784-4644-beeb-f91f1ced1245-logs\") pod \"nova-metadata-0\" (UID: \"cfc52755-8784-4644-beeb-f91f1ced1245\") " pod="openstack/nova-metadata-0" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.635324 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/797f4c29-1e6b-4ecb-a85a-11e859b5b619-config-data\") pod \"nova-scheduler-0\" (UID: \"797f4c29-1e6b-4ecb-a85a-11e859b5b619\") " pod="openstack/nova-scheduler-0" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.636688 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfc52755-8784-4644-beeb-f91f1ced1245-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cfc52755-8784-4644-beeb-f91f1ced1245\") " pod="openstack/nova-metadata-0" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.637218 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/797f4c29-1e6b-4ecb-a85a-11e859b5b619-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"797f4c29-1e6b-4ecb-a85a-11e859b5b619\") " pod="openstack/nova-scheduler-0" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.637657 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc52755-8784-4644-beeb-f91f1ced1245-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cfc52755-8784-4644-beeb-f91f1ced1245\") " pod="openstack/nova-metadata-0" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.648617 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc52755-8784-4644-beeb-f91f1ced1245-config-data\") pod \"nova-metadata-0\" (UID: \"cfc52755-8784-4644-beeb-f91f1ced1245\") " pod="openstack/nova-metadata-0" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.648988 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t5d4\" (UniqueName: \"kubernetes.io/projected/cfc52755-8784-4644-beeb-f91f1ced1245-kube-api-access-6t5d4\") pod \"nova-metadata-0\" (UID: \"cfc52755-8784-4644-beeb-f91f1ced1245\") " pod="openstack/nova-metadata-0" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.649966 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtc6c\" (UniqueName: \"kubernetes.io/projected/797f4c29-1e6b-4ecb-a85a-11e859b5b619-kube-api-access-xtc6c\") pod \"nova-scheduler-0\" (UID: \"797f4c29-1e6b-4ecb-a85a-11e859b5b619\") " pod="openstack/nova-scheduler-0" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.702491 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Sep 30 17:22:53 crc kubenswrapper[4821]: I0930 17:22:53.737561 4821 util.go:30] "No sandbox for pod can be found. 
Sep 30 17:22:54 crc kubenswrapper[4821]: I0930 17:22:54.160018 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Sep 30 17:22:54 crc kubenswrapper[4821]: I0930 17:22:54.191322 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cfc52755-8784-4644-beeb-f91f1ced1245","Type":"ContainerStarted","Data":"66fc5d8ed112d9873e910daf76fe318e87b9d3e69986a6c75875b3876317cef0"}
Sep 30 17:22:54 crc kubenswrapper[4821]: I0930 17:22:54.243752 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Sep 30 17:22:54 crc kubenswrapper[4821]: W0930 17:22:54.246898 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod797f4c29_1e6b_4ecb_a85a_11e859b5b619.slice/crio-32729dded9a77246744d4910df66d25e64e2ad7a3ea5caedc5239c4f1816a017 WatchSource:0}: Error finding container 32729dded9a77246744d4910df66d25e64e2ad7a3ea5caedc5239c4f1816a017: Status 404 returned error can't find the container with id 32729dded9a77246744d4910df66d25e64e2ad7a3ea5caedc5239c4f1816a017
Sep 30 17:22:54 crc kubenswrapper[4821]: I0930 17:22:54.717367 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6715a782-71c0-4471-971c-0d7f7fada4a1" path="/var/lib/kubelet/pods/6715a782-71c0-4471-971c-0d7f7fada4a1/volumes"
Sep 30 17:22:54 crc kubenswrapper[4821]: I0930 17:22:54.718121 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b825d54c-6a7f-4f40-a9e1-1cd385220b3c" path="/var/lib/kubelet/pods/b825d54c-6a7f-4f40-a9e1-1cd385220b3c/volumes"
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.138650 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.203967 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"797f4c29-1e6b-4ecb-a85a-11e859b5b619","Type":"ContainerStarted","Data":"bc6ed062940d342edc809422576c128cac0b12f72ed4322d60b75bd28af54965"}
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.204050 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"797f4c29-1e6b-4ecb-a85a-11e859b5b619","Type":"ContainerStarted","Data":"32729dded9a77246744d4910df66d25e64e2ad7a3ea5caedc5239c4f1816a017"}
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.216465 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cfc52755-8784-4644-beeb-f91f1ced1245","Type":"ContainerStarted","Data":"eb2e4d704547f3a38e8bb08e820fcb97b0df0106344bab6081aa6945630b7f38"}
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.216509 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cfc52755-8784-4644-beeb-f91f1ced1245","Type":"ContainerStarted","Data":"aff45f3ab1085c2637aec007c3a7840bafc31fd7b1d614e3c65b3ed5a9b9d8a8"}
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.223956 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.223939969 podStartE2EDuration="2.223939969s" podCreationTimestamp="2025-09-30 17:22:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:22:55.219990152 +0000 UTC m=+1171.125036096" watchObservedRunningTime="2025-09-30 17:22:55.223939969 +0000 UTC m=+1171.128985913"
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.225245 4821 generic.go:334] "Generic (PLEG): container finished" podID="35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed" containerID="7342c0c0c46e010f42d15bae1042edf597941d1af467c92e39f40883c28bfe90" exitCode=0
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.225332 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed","Type":"ContainerDied","Data":"7342c0c0c46e010f42d15bae1042edf597941d1af467c92e39f40883c28bfe90"}
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.225395 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed","Type":"ContainerDied","Data":"3c86c3c5d09fcbc950675a3f50a6a18a116cbfe2d927da7d4ade3acf2c8d55b6"}
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.225423 4821 scope.go:117] "RemoveContainer" containerID="7342c0c0c46e010f42d15bae1042edf597941d1af467c92e39f40883c28bfe90"
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.225749 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
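[Annotation] The podStartSLOduration figures above are plain timestamp arithmetic: the time the pod was observed running minus podCreationTimestamp (the pull timestamps are zero because the images were already present locally). For nova-scheduler-0 that is 17:22:55.223939969 − 17:22:53 = 2.223939969s; the same computation in Go:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	// Timestamps copied from the nova-scheduler-0 entry above.
    	created, _ := time.Parse(layout, "2025-09-30 17:22:53 +0000 UTC")
    	observed, _ := time.Parse(layout, "2025-09-30 17:22:55.223939969 +0000 UTC")
    	fmt.Println(observed.Sub(created)) // 2.223939969s
    }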
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.275397 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-combined-ca-bundle\") pod \"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed\" (UID: \"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed\") "
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.278862 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-logs\") pod \"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed\" (UID: \"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed\") "
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.278000 4821 scope.go:117] "RemoveContainer" containerID="392fd3f3516072e5e4e8d38a1c1660be57b878ad9d2e170bcc4f78adb706ffa0"
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.279616 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-internal-tls-certs\") pod \"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed\" (UID: \"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed\") "
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.279753 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-public-tls-certs\") pod \"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed\" (UID: \"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed\") "
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.279879 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-config-data\") pod \"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed\" (UID: \"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed\") "
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.280293 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kg2b\" (UniqueName: \"kubernetes.io/projected/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-kube-api-access-8kg2b\") pod \"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed\" (UID: \"35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed\") "
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.280304 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-logs" (OuterVolumeSpecName: "logs") pod "35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed" (UID: "35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.290120 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.2900581239999998 podStartE2EDuration="2.290058124s" podCreationTimestamp="2025-09-30 17:22:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:22:55.27142408 +0000 UTC m=+1171.176470024" watchObservedRunningTime="2025-09-30 17:22:55.290058124 +0000 UTC m=+1171.195104068"
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.313493 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-kube-api-access-8kg2b" (OuterVolumeSpecName: "kube-api-access-8kg2b") pod "35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed" (UID: "35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed"). InnerVolumeSpecName "kube-api-access-8kg2b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.324755 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-config-data" (OuterVolumeSpecName: "config-data") pod "35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed" (UID: "35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.325021 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed" (UID: "35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.352250 4821 scope.go:117] "RemoveContainer" containerID="7342c0c0c46e010f42d15bae1042edf597941d1af467c92e39f40883c28bfe90" Sep 30 17:22:55 crc kubenswrapper[4821]: E0930 17:22:55.352573 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7342c0c0c46e010f42d15bae1042edf597941d1af467c92e39f40883c28bfe90\": container with ID starting with 7342c0c0c46e010f42d15bae1042edf597941d1af467c92e39f40883c28bfe90 not found: ID does not exist" containerID="7342c0c0c46e010f42d15bae1042edf597941d1af467c92e39f40883c28bfe90" Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.352598 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7342c0c0c46e010f42d15bae1042edf597941d1af467c92e39f40883c28bfe90"} err="failed to get container status \"7342c0c0c46e010f42d15bae1042edf597941d1af467c92e39f40883c28bfe90\": rpc error: code = NotFound desc = could not find container \"7342c0c0c46e010f42d15bae1042edf597941d1af467c92e39f40883c28bfe90\": container with ID starting with 7342c0c0c46e010f42d15bae1042edf597941d1af467c92e39f40883c28bfe90 not found: ID does not exist" Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.352617 4821 scope.go:117] "RemoveContainer" containerID="392fd3f3516072e5e4e8d38a1c1660be57b878ad9d2e170bcc4f78adb706ffa0" Sep 30 17:22:55 crc kubenswrapper[4821]: E0930 17:22:55.353345 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"392fd3f3516072e5e4e8d38a1c1660be57b878ad9d2e170bcc4f78adb706ffa0\": container with ID starting with 392fd3f3516072e5e4e8d38a1c1660be57b878ad9d2e170bcc4f78adb706ffa0 not found: ID does not exist" containerID="392fd3f3516072e5e4e8d38a1c1660be57b878ad9d2e170bcc4f78adb706ffa0" Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.353367 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"392fd3f3516072e5e4e8d38a1c1660be57b878ad9d2e170bcc4f78adb706ffa0"} err="failed to get container status \"392fd3f3516072e5e4e8d38a1c1660be57b878ad9d2e170bcc4f78adb706ffa0\": rpc error: code = NotFound desc = could not find container \"392fd3f3516072e5e4e8d38a1c1660be57b878ad9d2e170bcc4f78adb706ffa0\": container with ID starting with 392fd3f3516072e5e4e8d38a1c1660be57b878ad9d2e170bcc4f78adb706ffa0 not found: ID does not exist" Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.377133 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed" (UID: "35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.379224 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed" (UID: "35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.385662 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-logs\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.385732 4821 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.385745 4821 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-public-tls-certs\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.385755 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.385765 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kg2b\" (UniqueName: \"kubernetes.io/projected/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-kube-api-access-8kg2b\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.385775 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.582177 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.610379 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.630397 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Sep 30 17:22:55 crc kubenswrapper[4821]: E0930 17:22:55.630895 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed" containerName="nova-api-log" Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.630918 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed" containerName="nova-api-log" Sep 30 17:22:55 crc kubenswrapper[4821]: E0930 17:22:55.630930 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed" containerName="nova-api-api" Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.630940 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed" containerName="nova-api-api" Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.631188 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed" containerName="nova-api-api" Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.631214 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed" containerName="nova-api-log" Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.632167 4821 util.go:30] "No sandbox for pod can be found. 
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.635453 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.635606 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.637553 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.647072 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.793709 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e17523e-cf69-41f9-bc54-c6a8a9dcba94-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8e17523e-cf69-41f9-bc54-c6a8a9dcba94\") " pod="openstack/nova-api-0"
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.793764 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e17523e-cf69-41f9-bc54-c6a8a9dcba94-config-data\") pod \"nova-api-0\" (UID: \"8e17523e-cf69-41f9-bc54-c6a8a9dcba94\") " pod="openstack/nova-api-0"
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.793829 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zj2g\" (UniqueName: \"kubernetes.io/projected/8e17523e-cf69-41f9-bc54-c6a8a9dcba94-kube-api-access-5zj2g\") pod \"nova-api-0\" (UID: \"8e17523e-cf69-41f9-bc54-c6a8a9dcba94\") " pod="openstack/nova-api-0"
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.793877 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e17523e-cf69-41f9-bc54-c6a8a9dcba94-public-tls-certs\") pod \"nova-api-0\" (UID: \"8e17523e-cf69-41f9-bc54-c6a8a9dcba94\") " pod="openstack/nova-api-0"
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.793894 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e17523e-cf69-41f9-bc54-c6a8a9dcba94-logs\") pod \"nova-api-0\" (UID: \"8e17523e-cf69-41f9-bc54-c6a8a9dcba94\") " pod="openstack/nova-api-0"
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.793912 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e17523e-cf69-41f9-bc54-c6a8a9dcba94-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8e17523e-cf69-41f9-bc54-c6a8a9dcba94\") " pod="openstack/nova-api-0"
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.895474 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e17523e-cf69-41f9-bc54-c6a8a9dcba94-config-data\") pod \"nova-api-0\" (UID: \"8e17523e-cf69-41f9-bc54-c6a8a9dcba94\") " pod="openstack/nova-api-0"
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.895605 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zj2g\" (UniqueName: \"kubernetes.io/projected/8e17523e-cf69-41f9-bc54-c6a8a9dcba94-kube-api-access-5zj2g\") pod \"nova-api-0\" (UID: \"8e17523e-cf69-41f9-bc54-c6a8a9dcba94\") " pod="openstack/nova-api-0"
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.895696 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e17523e-cf69-41f9-bc54-c6a8a9dcba94-public-tls-certs\") pod \"nova-api-0\" (UID: \"8e17523e-cf69-41f9-bc54-c6a8a9dcba94\") " pod="openstack/nova-api-0"
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.895716 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e17523e-cf69-41f9-bc54-c6a8a9dcba94-logs\") pod \"nova-api-0\" (UID: \"8e17523e-cf69-41f9-bc54-c6a8a9dcba94\") " pod="openstack/nova-api-0"
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.896219 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e17523e-cf69-41f9-bc54-c6a8a9dcba94-logs\") pod \"nova-api-0\" (UID: \"8e17523e-cf69-41f9-bc54-c6a8a9dcba94\") " pod="openstack/nova-api-0"
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.896508 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e17523e-cf69-41f9-bc54-c6a8a9dcba94-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8e17523e-cf69-41f9-bc54-c6a8a9dcba94\") " pod="openstack/nova-api-0"
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.896645 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e17523e-cf69-41f9-bc54-c6a8a9dcba94-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8e17523e-cf69-41f9-bc54-c6a8a9dcba94\") " pod="openstack/nova-api-0"
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.899877 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e17523e-cf69-41f9-bc54-c6a8a9dcba94-config-data\") pod \"nova-api-0\" (UID: \"8e17523e-cf69-41f9-bc54-c6a8a9dcba94\") " pod="openstack/nova-api-0"
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.907600 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e17523e-cf69-41f9-bc54-c6a8a9dcba94-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8e17523e-cf69-41f9-bc54-c6a8a9dcba94\") " pod="openstack/nova-api-0"
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.907767 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e17523e-cf69-41f9-bc54-c6a8a9dcba94-public-tls-certs\") pod \"nova-api-0\" (UID: \"8e17523e-cf69-41f9-bc54-c6a8a9dcba94\") " pod="openstack/nova-api-0"
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.911209 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e17523e-cf69-41f9-bc54-c6a8a9dcba94-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8e17523e-cf69-41f9-bc54-c6a8a9dcba94\") " pod="openstack/nova-api-0"
Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.914560 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zj2g\" (UniqueName: \"kubernetes.io/projected/8e17523e-cf69-41f9-bc54-c6a8a9dcba94-kube-api-access-5zj2g\") pod \"nova-api-0\" (UID: \"8e17523e-cf69-41f9-bc54-c6a8a9dcba94\") " pod="openstack/nova-api-0"
pod="openstack/nova-api-0" Sep 30 17:22:55 crc kubenswrapper[4821]: I0930 17:22:55.952168 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Sep 30 17:22:56 crc kubenswrapper[4821]: I0930 17:22:56.399986 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Sep 30 17:22:56 crc kubenswrapper[4821]: I0930 17:22:56.718981 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed" path="/var/lib/kubelet/pods/35c8d22a-edbe-4aeb-bfe1-acb768c2f6ed/volumes" Sep 30 17:22:57 crc kubenswrapper[4821]: I0930 17:22:57.268171 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e17523e-cf69-41f9-bc54-c6a8a9dcba94","Type":"ContainerStarted","Data":"f1d32902fe6ad1f4f9d29b6918af8ada19598748ed834159b96b18dca3af2763"} Sep 30 17:22:57 crc kubenswrapper[4821]: I0930 17:22:57.268513 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e17523e-cf69-41f9-bc54-c6a8a9dcba94","Type":"ContainerStarted","Data":"c47b1f2f1c8683bcbdc5f6fc51e2296e3a94910eba1d6766e0c255aa55a44cbd"} Sep 30 17:22:57 crc kubenswrapper[4821]: I0930 17:22:57.268527 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e17523e-cf69-41f9-bc54-c6a8a9dcba94","Type":"ContainerStarted","Data":"46dca15dea1719d1ea4de7c0966040a4d53dfc55929ffa980b3e3d921bc6fd10"} Sep 30 17:22:57 crc kubenswrapper[4821]: I0930 17:22:57.292232 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.292206239 podStartE2EDuration="2.292206239s" podCreationTimestamp="2025-09-30 17:22:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:22:57.285221975 +0000 UTC m=+1173.190267929" watchObservedRunningTime="2025-09-30 17:22:57.292206239 +0000 UTC m=+1173.197252193" Sep 30 17:22:58 crc kubenswrapper[4821]: I0930 17:22:58.703620 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Sep 30 17:22:58 crc kubenswrapper[4821]: I0930 17:22:58.703897 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Sep 30 17:22:58 crc kubenswrapper[4821]: I0930 17:22:58.738197 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Sep 30 17:23:03 crc kubenswrapper[4821]: I0930 17:23:03.702950 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Sep 30 17:23:03 crc kubenswrapper[4821]: I0930 17:23:03.703321 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Sep 30 17:23:03 crc kubenswrapper[4821]: I0930 17:23:03.739116 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Sep 30 17:23:03 crc kubenswrapper[4821]: I0930 17:23:03.765156 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Sep 30 17:23:04 crc kubenswrapper[4821]: I0930 17:23:04.360732 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Sep 30 17:23:04 crc kubenswrapper[4821]: I0930 17:23:04.719427 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" 
podUID="cfc52755-8784-4644-beeb-f91f1ced1245" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.183:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 30 17:23:04 crc kubenswrapper[4821]: I0930 17:23:04.719455 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="cfc52755-8784-4644-beeb-f91f1ced1245" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.183:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Sep 30 17:23:05 crc kubenswrapper[4821]: I0930 17:23:05.953196 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Sep 30 17:23:05 crc kubenswrapper[4821]: I0930 17:23:05.953503 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Sep 30 17:23:06 crc kubenswrapper[4821]: I0930 17:23:06.967271 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8e17523e-cf69-41f9-bc54-c6a8a9dcba94" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Sep 30 17:23:06 crc kubenswrapper[4821]: I0930 17:23:06.967301 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8e17523e-cf69-41f9-bc54-c6a8a9dcba94" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.185:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 30 17:23:13 crc kubenswrapper[4821]: I0930 17:23:13.710467 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Sep 30 17:23:13 crc kubenswrapper[4821]: I0930 17:23:13.712642 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Sep 30 17:23:13 crc kubenswrapper[4821]: I0930 17:23:13.720976 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Sep 30 17:23:14 crc kubenswrapper[4821]: I0930 17:23:14.423412 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Sep 30 17:23:15 crc kubenswrapper[4821]: I0930 17:23:15.960907 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Sep 30 17:23:15 crc kubenswrapper[4821]: I0930 17:23:15.961698 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Sep 30 17:23:15 crc kubenswrapper[4821]: I0930 17:23:15.962666 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Sep 30 17:23:15 crc kubenswrapper[4821]: I0930 17:23:15.969333 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Sep 30 17:23:16 crc kubenswrapper[4821]: I0930 17:23:16.433492 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Sep 30 17:23:16 crc kubenswrapper[4821]: I0930 17:23:16.440650 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Sep 30 17:24:49 crc kubenswrapper[4821]: I0930 17:24:49.349832 4821 patch_prober.go:28] interesting pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:24:49 crc kubenswrapper[4821]: I0930 17:24:49.350407 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:25:19 crc kubenswrapper[4821]: I0930 17:25:19.349662 4821 patch_prober.go:28] interesting pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:25:19 crc kubenswrapper[4821]: I0930 17:25:19.350113 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:25:27 crc kubenswrapper[4821]: I0930 17:25:27.876918 4821 scope.go:117] "RemoveContainer" containerID="1f0e33029c22e12918aec4bf410912fa3149868dd983dca33f9a1188845cf66a" Sep 30 17:25:49 crc kubenswrapper[4821]: I0930 17:25:49.349695 4821 patch_prober.go:28] interesting pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:25:49 crc kubenswrapper[4821]: I0930 17:25:49.350321 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:25:49 crc kubenswrapper[4821]: I0930 17:25:49.350366 4821 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" Sep 30 17:25:49 crc kubenswrapper[4821]: I0930 17:25:49.351795 4821 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"880f2361d0eb681ddddb941f0b7685c664d231bb6623d0569d1e69f40e5ec202"} pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 17:25:49 crc kubenswrapper[4821]: I0930 17:25:49.351866 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" containerID="cri-o://880f2361d0eb681ddddb941f0b7685c664d231bb6623d0569d1e69f40e5ec202" gracePeriod=600 Sep 30 17:25:49 crc kubenswrapper[4821]: I0930 17:25:49.720719 4821 generic.go:334] "Generic (PLEG): container finished" podID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerID="880f2361d0eb681ddddb941f0b7685c664d231bb6623d0569d1e69f40e5ec202" exitCode=0 Sep 30 17:25:49 crc kubenswrapper[4821]: I0930 17:25:49.720912 4821 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" event={"ID":"1c2ce348-eadc-4629-a03f-fb8924b5b434","Type":"ContainerDied","Data":"880f2361d0eb681ddddb941f0b7685c664d231bb6623d0569d1e69f40e5ec202"} Sep 30 17:25:49 crc kubenswrapper[4821]: I0930 17:25:49.721070 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" event={"ID":"1c2ce348-eadc-4629-a03f-fb8924b5b434","Type":"ContainerStarted","Data":"3b2af72a05496030d699d687642569746e55f2a2d5c57c654aa59598f54ecc5d"} Sep 30 17:25:49 crc kubenswrapper[4821]: I0930 17:25:49.721109 4821 scope.go:117] "RemoveContainer" containerID="1763d8a2cafcce9c75309a8111559b7e2dfe05de5a45a9dc8c3faa88662ff830" Sep 30 17:26:27 crc kubenswrapper[4821]: I0930 17:26:27.927414 4821 scope.go:117] "RemoveContainer" containerID="26b41bd42612ef95a77c5610ac7e0b4e37525a5f648c17072c4581fa06ea382b" Sep 30 17:27:04 crc kubenswrapper[4821]: I0930 17:27:04.316741 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zrnfd"] Sep 30 17:27:04 crc kubenswrapper[4821]: I0930 17:27:04.319281 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zrnfd" Sep 30 17:27:04 crc kubenswrapper[4821]: I0930 17:27:04.341514 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zrnfd"] Sep 30 17:27:04 crc kubenswrapper[4821]: I0930 17:27:04.514905 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4hrx\" (UniqueName: \"kubernetes.io/projected/ea31affd-35c2-4080-87e6-683d42a8d984-kube-api-access-c4hrx\") pod \"community-operators-zrnfd\" (UID: \"ea31affd-35c2-4080-87e6-683d42a8d984\") " pod="openshift-marketplace/community-operators-zrnfd" Sep 30 17:27:04 crc kubenswrapper[4821]: I0930 17:27:04.515134 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea31affd-35c2-4080-87e6-683d42a8d984-catalog-content\") pod \"community-operators-zrnfd\" (UID: \"ea31affd-35c2-4080-87e6-683d42a8d984\") " pod="openshift-marketplace/community-operators-zrnfd" Sep 30 17:27:04 crc kubenswrapper[4821]: I0930 17:27:04.515246 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea31affd-35c2-4080-87e6-683d42a8d984-utilities\") pod \"community-operators-zrnfd\" (UID: \"ea31affd-35c2-4080-87e6-683d42a8d984\") " pod="openshift-marketplace/community-operators-zrnfd" Sep 30 17:27:04 crc kubenswrapper[4821]: I0930 17:27:04.616895 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4hrx\" (UniqueName: \"kubernetes.io/projected/ea31affd-35c2-4080-87e6-683d42a8d984-kube-api-access-c4hrx\") pod \"community-operators-zrnfd\" (UID: \"ea31affd-35c2-4080-87e6-683d42a8d984\") " pod="openshift-marketplace/community-operators-zrnfd" Sep 30 17:27:04 crc kubenswrapper[4821]: I0930 17:27:04.617060 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea31affd-35c2-4080-87e6-683d42a8d984-catalog-content\") pod \"community-operators-zrnfd\" (UID: \"ea31affd-35c2-4080-87e6-683d42a8d984\") " pod="openshift-marketplace/community-operators-zrnfd" 
Sep 30 17:27:04 crc kubenswrapper[4821]: I0930 17:27:04.617676 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea31affd-35c2-4080-87e6-683d42a8d984-catalog-content\") pod \"community-operators-zrnfd\" (UID: \"ea31affd-35c2-4080-87e6-683d42a8d984\") " pod="openshift-marketplace/community-operators-zrnfd" Sep 30 17:27:04 crc kubenswrapper[4821]: I0930 17:27:04.618098 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea31affd-35c2-4080-87e6-683d42a8d984-utilities\") pod \"community-operators-zrnfd\" (UID: \"ea31affd-35c2-4080-87e6-683d42a8d984\") " pod="openshift-marketplace/community-operators-zrnfd" Sep 30 17:27:04 crc kubenswrapper[4821]: I0930 17:27:04.617780 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea31affd-35c2-4080-87e6-683d42a8d984-utilities\") pod \"community-operators-zrnfd\" (UID: \"ea31affd-35c2-4080-87e6-683d42a8d984\") " pod="openshift-marketplace/community-operators-zrnfd" Sep 30 17:27:04 crc kubenswrapper[4821]: I0930 17:27:04.658206 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4hrx\" (UniqueName: \"kubernetes.io/projected/ea31affd-35c2-4080-87e6-683d42a8d984-kube-api-access-c4hrx\") pod \"community-operators-zrnfd\" (UID: \"ea31affd-35c2-4080-87e6-683d42a8d984\") " pod="openshift-marketplace/community-operators-zrnfd" Sep 30 17:27:04 crc kubenswrapper[4821]: I0930 17:27:04.937495 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zrnfd" Sep 30 17:27:05 crc kubenswrapper[4821]: I0930 17:27:05.456281 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zrnfd"] Sep 30 17:27:06 crc kubenswrapper[4821]: I0930 17:27:06.377771 4821 generic.go:334] "Generic (PLEG): container finished" podID="ea31affd-35c2-4080-87e6-683d42a8d984" containerID="062d5fb214be44fc19ff3658a29bdebd61a019505f0ca7337e4211616f9b6d1f" exitCode=0 Sep 30 17:27:06 crc kubenswrapper[4821]: I0930 17:27:06.377842 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zrnfd" event={"ID":"ea31affd-35c2-4080-87e6-683d42a8d984","Type":"ContainerDied","Data":"062d5fb214be44fc19ff3658a29bdebd61a019505f0ca7337e4211616f9b6d1f"} Sep 30 17:27:06 crc kubenswrapper[4821]: I0930 17:27:06.378162 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zrnfd" event={"ID":"ea31affd-35c2-4080-87e6-683d42a8d984","Type":"ContainerStarted","Data":"e70c33c78de77f6fdc894173aa46aed28a746ae2b47042a4fffdd05610e89f00"} Sep 30 17:27:06 crc kubenswrapper[4821]: I0930 17:27:06.380659 4821 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 30 17:27:07 crc kubenswrapper[4821]: I0930 17:27:07.389593 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zrnfd" event={"ID":"ea31affd-35c2-4080-87e6-683d42a8d984","Type":"ContainerStarted","Data":"ee8d40dc9fbc7ebc8728bac9fc0592eba14f4e7bfb4fee6b36358c789a069a62"} Sep 30 17:27:09 crc kubenswrapper[4821]: I0930 17:27:09.405861 4821 generic.go:334] "Generic (PLEG): container finished" podID="ea31affd-35c2-4080-87e6-683d42a8d984" containerID="ee8d40dc9fbc7ebc8728bac9fc0592eba14f4e7bfb4fee6b36358c789a069a62" 
exitCode=0 Sep 30 17:27:09 crc kubenswrapper[4821]: I0930 17:27:09.405946 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zrnfd" event={"ID":"ea31affd-35c2-4080-87e6-683d42a8d984","Type":"ContainerDied","Data":"ee8d40dc9fbc7ebc8728bac9fc0592eba14f4e7bfb4fee6b36358c789a069a62"} Sep 30 17:27:10 crc kubenswrapper[4821]: I0930 17:27:10.419414 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zrnfd" event={"ID":"ea31affd-35c2-4080-87e6-683d42a8d984","Type":"ContainerStarted","Data":"0966cd706868e02c33da389c698485be5c03e294d29dc55ee52562ced9e1c679"} Sep 30 17:27:10 crc kubenswrapper[4821]: I0930 17:27:10.440892 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zrnfd" podStartSLOduration=2.860247243 podStartE2EDuration="6.440878629s" podCreationTimestamp="2025-09-30 17:27:04 +0000 UTC" firstStartedPulling="2025-09-30 17:27:06.380370754 +0000 UTC m=+1422.285416698" lastFinishedPulling="2025-09-30 17:27:09.96100214 +0000 UTC m=+1425.866048084" observedRunningTime="2025-09-30 17:27:10.437434653 +0000 UTC m=+1426.342480597" watchObservedRunningTime="2025-09-30 17:27:10.440878629 +0000 UTC m=+1426.345924573" Sep 30 17:27:14 crc kubenswrapper[4821]: I0930 17:27:14.938614 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zrnfd" Sep 30 17:27:14 crc kubenswrapper[4821]: I0930 17:27:14.939301 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zrnfd" Sep 30 17:27:14 crc kubenswrapper[4821]: I0930 17:27:14.992144 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zrnfd" Sep 30 17:27:15 crc kubenswrapper[4821]: I0930 17:27:15.520545 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zrnfd" Sep 30 17:27:15 crc kubenswrapper[4821]: I0930 17:27:15.576573 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zrnfd"] Sep 30 17:27:17 crc kubenswrapper[4821]: I0930 17:27:17.473534 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zrnfd" podUID="ea31affd-35c2-4080-87e6-683d42a8d984" containerName="registry-server" containerID="cri-o://0966cd706868e02c33da389c698485be5c03e294d29dc55ee52562ced9e1c679" gracePeriod=2 Sep 30 17:27:17 crc kubenswrapper[4821]: I0930 17:27:17.992395 4821 util.go:48] "No ready sandbox for pod can be found. 
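The "Observed pod startup duration" entry above reports two numbers: podStartE2EDuration (observed running time minus pod creation) and podStartSLOduration, which is smaller because it excludes the image-pull window bounded by firstStartedPulling and lastFinishedPulling. A small Go check of that arithmetic against the logged values follows; the subtraction is the assumed relationship (it reproduces the logged figures to within a few milliseconds), and the timestamps are copied from the entry with their monotonic-clock suffixes dropped.

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	parse := func(s string) time.Time {
    		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
    		if err != nil {
    			panic(err)
    		}
    		return t
    	}
    	created := parse("2025-09-30 17:27:04 +0000 UTC")
    	firstStartedPulling := parse("2025-09-30 17:27:06.380370754 +0000 UTC")
    	lastFinishedPulling := parse("2025-09-30 17:27:09.96100214 +0000 UTC")
    	observedRunning := parse("2025-09-30 17:27:10.437434653 +0000 UTC")

    	// E2E: creation to observed running. SLO: E2E minus pull time.
    	e2e := observedRunning.Sub(created)
    	slo := e2e - lastFinishedPulling.Sub(firstStartedPulling)
    	fmt.Printf("podStartE2EDuration=%v podStartSLOduration=%v\n", e2e, slo)
    	// ~6.437s and ~2.857s, in line with the logged 6.440878629s / 2.860247243s.
    }

By contrast, the earlier nova-api-0 entry had zero-valued pull timestamps ("0001-01-01 00:00:00"), so its SLO and E2E durations were identical: nothing was pulled, so nothing was excluded.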
Need to start a new one" pod="openshift-marketplace/community-operators-zrnfd" Sep 30 17:27:18 crc kubenswrapper[4821]: I0930 17:27:18.057219 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea31affd-35c2-4080-87e6-683d42a8d984-utilities\") pod \"ea31affd-35c2-4080-87e6-683d42a8d984\" (UID: \"ea31affd-35c2-4080-87e6-683d42a8d984\") " Sep 30 17:27:18 crc kubenswrapper[4821]: I0930 17:27:18.057325 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4hrx\" (UniqueName: \"kubernetes.io/projected/ea31affd-35c2-4080-87e6-683d42a8d984-kube-api-access-c4hrx\") pod \"ea31affd-35c2-4080-87e6-683d42a8d984\" (UID: \"ea31affd-35c2-4080-87e6-683d42a8d984\") " Sep 30 17:27:18 crc kubenswrapper[4821]: I0930 17:27:18.057513 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea31affd-35c2-4080-87e6-683d42a8d984-catalog-content\") pod \"ea31affd-35c2-4080-87e6-683d42a8d984\" (UID: \"ea31affd-35c2-4080-87e6-683d42a8d984\") " Sep 30 17:27:18 crc kubenswrapper[4821]: I0930 17:27:18.058324 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea31affd-35c2-4080-87e6-683d42a8d984-utilities" (OuterVolumeSpecName: "utilities") pod "ea31affd-35c2-4080-87e6-683d42a8d984" (UID: "ea31affd-35c2-4080-87e6-683d42a8d984"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:27:18 crc kubenswrapper[4821]: I0930 17:27:18.064808 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea31affd-35c2-4080-87e6-683d42a8d984-kube-api-access-c4hrx" (OuterVolumeSpecName: "kube-api-access-c4hrx") pod "ea31affd-35c2-4080-87e6-683d42a8d984" (UID: "ea31affd-35c2-4080-87e6-683d42a8d984"). InnerVolumeSpecName "kube-api-access-c4hrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:27:18 crc kubenswrapper[4821]: I0930 17:27:18.112296 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea31affd-35c2-4080-87e6-683d42a8d984-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea31affd-35c2-4080-87e6-683d42a8d984" (UID: "ea31affd-35c2-4080-87e6-683d42a8d984"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:27:18 crc kubenswrapper[4821]: I0930 17:27:18.160326 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea31affd-35c2-4080-87e6-683d42a8d984-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:27:18 crc kubenswrapper[4821]: I0930 17:27:18.160590 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea31affd-35c2-4080-87e6-683d42a8d984-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:27:18 crc kubenswrapper[4821]: I0930 17:27:18.160659 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4hrx\" (UniqueName: \"kubernetes.io/projected/ea31affd-35c2-4080-87e6-683d42a8d984-kube-api-access-c4hrx\") on node \"crc\" DevicePath \"\"" Sep 30 17:27:18 crc kubenswrapper[4821]: I0930 17:27:18.482188 4821 generic.go:334] "Generic (PLEG): container finished" podID="ea31affd-35c2-4080-87e6-683d42a8d984" containerID="0966cd706868e02c33da389c698485be5c03e294d29dc55ee52562ced9e1c679" exitCode=0 Sep 30 17:27:18 crc kubenswrapper[4821]: I0930 17:27:18.482225 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zrnfd" event={"ID":"ea31affd-35c2-4080-87e6-683d42a8d984","Type":"ContainerDied","Data":"0966cd706868e02c33da389c698485be5c03e294d29dc55ee52562ced9e1c679"} Sep 30 17:27:18 crc kubenswrapper[4821]: I0930 17:27:18.482251 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zrnfd" event={"ID":"ea31affd-35c2-4080-87e6-683d42a8d984","Type":"ContainerDied","Data":"e70c33c78de77f6fdc894173aa46aed28a746ae2b47042a4fffdd05610e89f00"} Sep 30 17:27:18 crc kubenswrapper[4821]: I0930 17:27:18.482268 4821 scope.go:117] "RemoveContainer" containerID="0966cd706868e02c33da389c698485be5c03e294d29dc55ee52562ced9e1c679" Sep 30 17:27:18 crc kubenswrapper[4821]: I0930 17:27:18.482272 4821 util.go:48] "No ready sandbox for pod can be found. 
Sep 30 17:27:18 crc kubenswrapper[4821]: I0930 17:27:18.512113 4821 scope.go:117] "RemoveContainer" containerID="ee8d40dc9fbc7ebc8728bac9fc0592eba14f4e7bfb4fee6b36358c789a069a62"
Sep 30 17:27:18 crc kubenswrapper[4821]: I0930 17:27:18.522138 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zrnfd"]
Sep 30 17:27:18 crc kubenswrapper[4821]: I0930 17:27:18.530728 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zrnfd"]
Sep 30 17:27:18 crc kubenswrapper[4821]: I0930 17:27:18.576342 4821 scope.go:117] "RemoveContainer" containerID="062d5fb214be44fc19ff3658a29bdebd61a019505f0ca7337e4211616f9b6d1f"
Sep 30 17:27:18 crc kubenswrapper[4821]: I0930 17:27:18.609302 4821 scope.go:117] "RemoveContainer" containerID="0966cd706868e02c33da389c698485be5c03e294d29dc55ee52562ced9e1c679"
Sep 30 17:27:18 crc kubenswrapper[4821]: E0930 17:27:18.616248 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0966cd706868e02c33da389c698485be5c03e294d29dc55ee52562ced9e1c679\": container with ID starting with 0966cd706868e02c33da389c698485be5c03e294d29dc55ee52562ced9e1c679 not found: ID does not exist" containerID="0966cd706868e02c33da389c698485be5c03e294d29dc55ee52562ced9e1c679"
Sep 30 17:27:18 crc kubenswrapper[4821]: I0930 17:27:18.616295 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0966cd706868e02c33da389c698485be5c03e294d29dc55ee52562ced9e1c679"} err="failed to get container status \"0966cd706868e02c33da389c698485be5c03e294d29dc55ee52562ced9e1c679\": rpc error: code = NotFound desc = could not find container \"0966cd706868e02c33da389c698485be5c03e294d29dc55ee52562ced9e1c679\": container with ID starting with 0966cd706868e02c33da389c698485be5c03e294d29dc55ee52562ced9e1c679 not found: ID does not exist"
Sep 30 17:27:18 crc kubenswrapper[4821]: I0930 17:27:18.616321 4821 scope.go:117] "RemoveContainer" containerID="ee8d40dc9fbc7ebc8728bac9fc0592eba14f4e7bfb4fee6b36358c789a069a62"
Sep 30 17:27:18 crc kubenswrapper[4821]: E0930 17:27:18.616653 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee8d40dc9fbc7ebc8728bac9fc0592eba14f4e7bfb4fee6b36358c789a069a62\": container with ID starting with ee8d40dc9fbc7ebc8728bac9fc0592eba14f4e7bfb4fee6b36358c789a069a62 not found: ID does not exist" containerID="ee8d40dc9fbc7ebc8728bac9fc0592eba14f4e7bfb4fee6b36358c789a069a62"
Sep 30 17:27:18 crc kubenswrapper[4821]: I0930 17:27:18.616693 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee8d40dc9fbc7ebc8728bac9fc0592eba14f4e7bfb4fee6b36358c789a069a62"} err="failed to get container status \"ee8d40dc9fbc7ebc8728bac9fc0592eba14f4e7bfb4fee6b36358c789a069a62\": rpc error: code = NotFound desc = could not find container \"ee8d40dc9fbc7ebc8728bac9fc0592eba14f4e7bfb4fee6b36358c789a069a62\": container with ID starting with ee8d40dc9fbc7ebc8728bac9fc0592eba14f4e7bfb4fee6b36358c789a069a62 not found: ID does not exist"
Sep 30 17:27:18 crc kubenswrapper[4821]: I0930 17:27:18.616733 4821 scope.go:117] "RemoveContainer" containerID="062d5fb214be44fc19ff3658a29bdebd61a019505f0ca7337e4211616f9b6d1f"
Sep 30 17:27:18 crc kubenswrapper[4821]: E0930 17:27:18.616993 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"062d5fb214be44fc19ff3658a29bdebd61a019505f0ca7337e4211616f9b6d1f\": container with ID starting with 062d5fb214be44fc19ff3658a29bdebd61a019505f0ca7337e4211616f9b6d1f not found: ID does not exist" containerID="062d5fb214be44fc19ff3658a29bdebd61a019505f0ca7337e4211616f9b6d1f"
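These NotFound errors are benign: the pod was force-removed, so by the time the kubelet asks CRI-O for the status of each container it wants to delete, the runtime has already discarded it. The idiomatic way to treat such a delete as idempotent is to inspect the gRPC status code, as in the sketch below; removeContainer here is a stand-in that fabricates the runtime's answer, not the kubelet's actual CRI client.

    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // removeContainer simulates a CRI delete whose target is already gone.
    func removeContainer(id string) error {
    	return status.Errorf(codes.NotFound, "could not find container %q", id)
    }

    func main() {
    	err := removeContainer("0966cd706868e02c33da389c698485be5c03e294d29dc55ee52562ced9e1c679")
    	// NotFound means the desired end state (container gone) already holds,
    	// so treat it as success rather than surfacing an error.
    	if status.Code(err) == codes.NotFound {
    		fmt.Println("container already gone; treating delete as success")
    		err = nil
    	}
    	if err != nil {
    		panic(err)
    	}
    }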
failed" err="rpc error: code = NotFound desc = could not find container \"062d5fb214be44fc19ff3658a29bdebd61a019505f0ca7337e4211616f9b6d1f\": container with ID starting with 062d5fb214be44fc19ff3658a29bdebd61a019505f0ca7337e4211616f9b6d1f not found: ID does not exist" containerID="062d5fb214be44fc19ff3658a29bdebd61a019505f0ca7337e4211616f9b6d1f" Sep 30 17:27:18 crc kubenswrapper[4821]: I0930 17:27:18.617011 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"062d5fb214be44fc19ff3658a29bdebd61a019505f0ca7337e4211616f9b6d1f"} err="failed to get container status \"062d5fb214be44fc19ff3658a29bdebd61a019505f0ca7337e4211616f9b6d1f\": rpc error: code = NotFound desc = could not find container \"062d5fb214be44fc19ff3658a29bdebd61a019505f0ca7337e4211616f9b6d1f\": container with ID starting with 062d5fb214be44fc19ff3658a29bdebd61a019505f0ca7337e4211616f9b6d1f not found: ID does not exist" Sep 30 17:27:18 crc kubenswrapper[4821]: I0930 17:27:18.716993 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea31affd-35c2-4080-87e6-683d42a8d984" path="/var/lib/kubelet/pods/ea31affd-35c2-4080-87e6-683d42a8d984/volumes" Sep 30 17:27:27 crc kubenswrapper[4821]: I0930 17:27:27.987041 4821 scope.go:117] "RemoveContainer" containerID="1c24ceef7856b575ffd84a4eeac247e6454d7967a0e7976919e20257b42d15ce" Sep 30 17:27:28 crc kubenswrapper[4821]: I0930 17:27:28.009518 4821 scope.go:117] "RemoveContainer" containerID="ad1fb2e868d1f5b5a7dd362a2e669a7857d58cfff69e134bc0ae56f28a23eb7a" Sep 30 17:27:49 crc kubenswrapper[4821]: I0930 17:27:49.349821 4821 patch_prober.go:28] interesting pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:27:49 crc kubenswrapper[4821]: I0930 17:27:49.351495 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:28:19 crc kubenswrapper[4821]: I0930 17:28:19.349699 4821 patch_prober.go:28] interesting pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:28:19 crc kubenswrapper[4821]: I0930 17:28:19.351510 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:28:28 crc kubenswrapper[4821]: I0930 17:28:28.066561 4821 scope.go:117] "RemoveContainer" containerID="fd6979c1b177dd7d1e22ab200d53f9c9fb743efab1bb49ad7ad3e4b7ba40cdea" Sep 30 17:28:46 crc kubenswrapper[4821]: I0930 17:28:46.741982 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7548j"] Sep 30 17:28:46 crc kubenswrapper[4821]: E0930 17:28:46.743006 4821 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ea31affd-35c2-4080-87e6-683d42a8d984" containerName="extract-utilities" Sep 30 17:28:46 crc kubenswrapper[4821]: I0930 17:28:46.743025 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea31affd-35c2-4080-87e6-683d42a8d984" containerName="extract-utilities" Sep 30 17:28:46 crc kubenswrapper[4821]: E0930 17:28:46.743066 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea31affd-35c2-4080-87e6-683d42a8d984" containerName="extract-content" Sep 30 17:28:46 crc kubenswrapper[4821]: I0930 17:28:46.743076 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea31affd-35c2-4080-87e6-683d42a8d984" containerName="extract-content" Sep 30 17:28:46 crc kubenswrapper[4821]: E0930 17:28:46.743107 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea31affd-35c2-4080-87e6-683d42a8d984" containerName="registry-server" Sep 30 17:28:46 crc kubenswrapper[4821]: I0930 17:28:46.743117 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea31affd-35c2-4080-87e6-683d42a8d984" containerName="registry-server" Sep 30 17:28:46 crc kubenswrapper[4821]: I0930 17:28:46.743334 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea31affd-35c2-4080-87e6-683d42a8d984" containerName="registry-server" Sep 30 17:28:46 crc kubenswrapper[4821]: I0930 17:28:46.744852 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7548j" Sep 30 17:28:46 crc kubenswrapper[4821]: I0930 17:28:46.756640 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7548j"] Sep 30 17:28:46 crc kubenswrapper[4821]: I0930 17:28:46.919827 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83dd6b67-690a-48d1-9662-2c638b9db5c9-utilities\") pod \"redhat-operators-7548j\" (UID: \"83dd6b67-690a-48d1-9662-2c638b9db5c9\") " pod="openshift-marketplace/redhat-operators-7548j" Sep 30 17:28:46 crc kubenswrapper[4821]: I0930 17:28:46.920010 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgpwh\" (UniqueName: \"kubernetes.io/projected/83dd6b67-690a-48d1-9662-2c638b9db5c9-kube-api-access-kgpwh\") pod \"redhat-operators-7548j\" (UID: \"83dd6b67-690a-48d1-9662-2c638b9db5c9\") " pod="openshift-marketplace/redhat-operators-7548j" Sep 30 17:28:46 crc kubenswrapper[4821]: I0930 17:28:46.920129 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83dd6b67-690a-48d1-9662-2c638b9db5c9-catalog-content\") pod \"redhat-operators-7548j\" (UID: \"83dd6b67-690a-48d1-9662-2c638b9db5c9\") " pod="openshift-marketplace/redhat-operators-7548j" Sep 30 17:28:47 crc kubenswrapper[4821]: I0930 17:28:47.021643 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83dd6b67-690a-48d1-9662-2c638b9db5c9-utilities\") pod \"redhat-operators-7548j\" (UID: \"83dd6b67-690a-48d1-9662-2c638b9db5c9\") " pod="openshift-marketplace/redhat-operators-7548j" Sep 30 17:28:47 crc kubenswrapper[4821]: I0930 17:28:47.021746 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgpwh\" (UniqueName: \"kubernetes.io/projected/83dd6b67-690a-48d1-9662-2c638b9db5c9-kube-api-access-kgpwh\") pod \"redhat-operators-7548j\" (UID: 
\"83dd6b67-690a-48d1-9662-2c638b9db5c9\") " pod="openshift-marketplace/redhat-operators-7548j" Sep 30 17:28:47 crc kubenswrapper[4821]: I0930 17:28:47.021808 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83dd6b67-690a-48d1-9662-2c638b9db5c9-catalog-content\") pod \"redhat-operators-7548j\" (UID: \"83dd6b67-690a-48d1-9662-2c638b9db5c9\") " pod="openshift-marketplace/redhat-operators-7548j" Sep 30 17:28:47 crc kubenswrapper[4821]: I0930 17:28:47.022414 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83dd6b67-690a-48d1-9662-2c638b9db5c9-utilities\") pod \"redhat-operators-7548j\" (UID: \"83dd6b67-690a-48d1-9662-2c638b9db5c9\") " pod="openshift-marketplace/redhat-operators-7548j" Sep 30 17:28:47 crc kubenswrapper[4821]: I0930 17:28:47.023495 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83dd6b67-690a-48d1-9662-2c638b9db5c9-catalog-content\") pod \"redhat-operators-7548j\" (UID: \"83dd6b67-690a-48d1-9662-2c638b9db5c9\") " pod="openshift-marketplace/redhat-operators-7548j" Sep 30 17:28:47 crc kubenswrapper[4821]: I0930 17:28:47.040923 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgpwh\" (UniqueName: \"kubernetes.io/projected/83dd6b67-690a-48d1-9662-2c638b9db5c9-kube-api-access-kgpwh\") pod \"redhat-operators-7548j\" (UID: \"83dd6b67-690a-48d1-9662-2c638b9db5c9\") " pod="openshift-marketplace/redhat-operators-7548j" Sep 30 17:28:47 crc kubenswrapper[4821]: I0930 17:28:47.067647 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7548j" Sep 30 17:28:47 crc kubenswrapper[4821]: I0930 17:28:47.501790 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7548j"] Sep 30 17:28:47 crc kubenswrapper[4821]: W0930 17:28:47.513761 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83dd6b67_690a_48d1_9662_2c638b9db5c9.slice/crio-0c8977263a4ca097176a014f4f22a4a6a7fc04a7a18d3215b2b0f6d3680c3ed9 WatchSource:0}: Error finding container 0c8977263a4ca097176a014f4f22a4a6a7fc04a7a18d3215b2b0f6d3680c3ed9: Status 404 returned error can't find the container with id 0c8977263a4ca097176a014f4f22a4a6a7fc04a7a18d3215b2b0f6d3680c3ed9 Sep 30 17:28:48 crc kubenswrapper[4821]: I0930 17:28:48.203787 4821 generic.go:334] "Generic (PLEG): container finished" podID="83dd6b67-690a-48d1-9662-2c638b9db5c9" containerID="e424e47475841617acab5f51d7751480dcc6721a4da20ef2c46d151da4642896" exitCode=0 Sep 30 17:28:48 crc kubenswrapper[4821]: I0930 17:28:48.203860 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7548j" event={"ID":"83dd6b67-690a-48d1-9662-2c638b9db5c9","Type":"ContainerDied","Data":"e424e47475841617acab5f51d7751480dcc6721a4da20ef2c46d151da4642896"} Sep 30 17:28:48 crc kubenswrapper[4821]: I0930 17:28:48.204108 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7548j" event={"ID":"83dd6b67-690a-48d1-9662-2c638b9db5c9","Type":"ContainerStarted","Data":"0c8977263a4ca097176a014f4f22a4a6a7fc04a7a18d3215b2b0f6d3680c3ed9"} Sep 30 17:28:49 crc kubenswrapper[4821]: I0930 17:28:49.349882 4821 patch_prober.go:28] interesting 
pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:28:49 crc kubenswrapper[4821]: I0930 17:28:49.350261 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:28:49 crc kubenswrapper[4821]: I0930 17:28:49.350346 4821 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" Sep 30 17:28:49 crc kubenswrapper[4821]: I0930 17:28:49.351017 4821 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3b2af72a05496030d699d687642569746e55f2a2d5c57c654aa59598f54ecc5d"} pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 17:28:49 crc kubenswrapper[4821]: I0930 17:28:49.351075 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" containerID="cri-o://3b2af72a05496030d699d687642569746e55f2a2d5c57c654aa59598f54ecc5d" gracePeriod=600 Sep 30 17:28:49 crc kubenswrapper[4821]: E0930 17:28:49.484183 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:28:50 crc kubenswrapper[4821]: I0930 17:28:50.219879 4821 generic.go:334] "Generic (PLEG): container finished" podID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerID="3b2af72a05496030d699d687642569746e55f2a2d5c57c654aa59598f54ecc5d" exitCode=0 Sep 30 17:28:50 crc kubenswrapper[4821]: I0930 17:28:50.219909 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" event={"ID":"1c2ce348-eadc-4629-a03f-fb8924b5b434","Type":"ContainerDied","Data":"3b2af72a05496030d699d687642569746e55f2a2d5c57c654aa59598f54ecc5d"} Sep 30 17:28:50 crc kubenswrapper[4821]: I0930 17:28:50.220317 4821 scope.go:117] "RemoveContainer" containerID="880f2361d0eb681ddddb941f0b7685c664d231bb6623d0569d1e69f40e5ec202" Sep 30 17:28:50 crc kubenswrapper[4821]: I0930 17:28:50.220936 4821 scope.go:117] "RemoveContainer" containerID="3b2af72a05496030d699d687642569746e55f2a2d5c57c654aa59598f54ecc5d" Sep 30 17:28:50 crc kubenswrapper[4821]: E0930 17:28:50.221302 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" 
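From here on, machine-config-daemon is in CrashLoopBackOff: each "RemoveContainer" / "Error syncing pod" pair below is the kubelet declining to restart the container until the backoff window passes. The delay doubles with each consecutive crash and is capped, which is why every message reads "back-off 5m0s". A sketch of that schedule follows, using the kubelet's documented defaults (10s base, 5m cap) as assumptions; the loop itself is illustrative, not the kubelet's backoff implementation.

    package main

    import (
    	"fmt"
    	"time"
    )

    // backoffDelay returns the restart delay after a given number of
    // consecutive crashes: doubling from a base, clamped at a maximum.
    func backoffDelay(restarts int) time.Duration {
    	const (
    		base     = 10 * time.Second
    		maxDelay = 5 * time.Minute
    	)
    	d := base
    	for i := 0; i < restarts; i++ {
    		d *= 2
    		if d >= maxDelay {
    			return maxDelay
    		}
    	}
    	return d
    }

    func main() {
    	for r := 0; r <= 6; r++ {
    		fmt.Printf("restart #%d -> wait %v\n", r, backoffDelay(r))
    	}
    	// 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s: after roughly five crashes
    	// the pod sits at the cap, matching the repeated "back-off 5m0s" here.
    }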
pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:28:50 crc kubenswrapper[4821]: I0930 17:28:50.222879 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7548j" event={"ID":"83dd6b67-690a-48d1-9662-2c638b9db5c9","Type":"ContainerStarted","Data":"c06ec1266635667a8fe4de8e0ccad9f66e498c491073e7a4682f21e6ef34201d"} Sep 30 17:28:56 crc kubenswrapper[4821]: I0930 17:28:56.041731 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-4j6r4"] Sep 30 17:28:56 crc kubenswrapper[4821]: I0930 17:28:56.051451 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-4j6r4"] Sep 30 17:28:56 crc kubenswrapper[4821]: I0930 17:28:56.722052 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fb6d6a0-c0b6-4260-bcbd-b8af003f848f" path="/var/lib/kubelet/pods/4fb6d6a0-c0b6-4260-bcbd-b8af003f848f/volumes" Sep 30 17:28:57 crc kubenswrapper[4821]: I0930 17:28:57.301532 4821 generic.go:334] "Generic (PLEG): container finished" podID="83dd6b67-690a-48d1-9662-2c638b9db5c9" containerID="c06ec1266635667a8fe4de8e0ccad9f66e498c491073e7a4682f21e6ef34201d" exitCode=0 Sep 30 17:28:57 crc kubenswrapper[4821]: I0930 17:28:57.301613 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7548j" event={"ID":"83dd6b67-690a-48d1-9662-2c638b9db5c9","Type":"ContainerDied","Data":"c06ec1266635667a8fe4de8e0ccad9f66e498c491073e7a4682f21e6ef34201d"} Sep 30 17:28:58 crc kubenswrapper[4821]: I0930 17:28:58.312868 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7548j" event={"ID":"83dd6b67-690a-48d1-9662-2c638b9db5c9","Type":"ContainerStarted","Data":"84f0c14dd79422a1a470203822ebcde9aab4de973ef922b3812aeca72e9be4e3"} Sep 30 17:28:58 crc kubenswrapper[4821]: I0930 17:28:58.333763 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7548j" podStartSLOduration=2.745371987 podStartE2EDuration="12.333743738s" podCreationTimestamp="2025-09-30 17:28:46 +0000 UTC" firstStartedPulling="2025-09-30 17:28:48.205525407 +0000 UTC m=+1524.110571351" lastFinishedPulling="2025-09-30 17:28:57.793897158 +0000 UTC m=+1533.698943102" observedRunningTime="2025-09-30 17:28:58.333018571 +0000 UTC m=+1534.238064515" watchObservedRunningTime="2025-09-30 17:28:58.333743738 +0000 UTC m=+1534.238789672" Sep 30 17:29:01 crc kubenswrapper[4821]: I0930 17:29:01.059752 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-fmj6j"] Sep 30 17:29:01 crc kubenswrapper[4821]: I0930 17:29:01.066468 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-fmj6j"] Sep 30 17:29:02 crc kubenswrapper[4821]: I0930 17:29:02.715934 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b9935cd-21ae-4020-b970-1ad0bc26b130" path="/var/lib/kubelet/pods/4b9935cd-21ae-4020-b970-1ad0bc26b130/volumes" Sep 30 17:29:03 crc kubenswrapper[4821]: I0930 17:29:03.707547 4821 scope.go:117] "RemoveContainer" containerID="3b2af72a05496030d699d687642569746e55f2a2d5c57c654aa59598f54ecc5d" Sep 30 17:29:03 crc kubenswrapper[4821]: E0930 17:29:03.708170 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:29:06 crc kubenswrapper[4821]: I0930 17:29:06.031326 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-df7p4"] Sep 30 17:29:06 crc kubenswrapper[4821]: I0930 17:29:06.043817 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-24ab-account-create-779jc"] Sep 30 17:29:06 crc kubenswrapper[4821]: I0930 17:29:06.053744 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-24ab-account-create-779jc"] Sep 30 17:29:06 crc kubenswrapper[4821]: I0930 17:29:06.060335 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-df7p4"] Sep 30 17:29:06 crc kubenswrapper[4821]: I0930 17:29:06.718129 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17cc7c47-8e87-4cdd-b233-605733bd7444" path="/var/lib/kubelet/pods/17cc7c47-8e87-4cdd-b233-605733bd7444/volumes" Sep 30 17:29:06 crc kubenswrapper[4821]: I0930 17:29:06.718597 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21178c4c-b5c9-4507-a186-096e63f59c93" path="/var/lib/kubelet/pods/21178c4c-b5c9-4507-a186-096e63f59c93/volumes" Sep 30 17:29:07 crc kubenswrapper[4821]: I0930 17:29:07.067825 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7548j" Sep 30 17:29:07 crc kubenswrapper[4821]: I0930 17:29:07.067868 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7548j" Sep 30 17:29:07 crc kubenswrapper[4821]: I0930 17:29:07.112860 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7548j" Sep 30 17:29:07 crc kubenswrapper[4821]: I0930 17:29:07.458827 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7548j" Sep 30 17:29:07 crc kubenswrapper[4821]: I0930 17:29:07.500105 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7548j"] Sep 30 17:29:09 crc kubenswrapper[4821]: I0930 17:29:09.418677 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7548j" podUID="83dd6b67-690a-48d1-9662-2c638b9db5c9" containerName="registry-server" containerID="cri-o://84f0c14dd79422a1a470203822ebcde9aab4de973ef922b3812aeca72e9be4e3" gracePeriod=2 Sep 30 17:29:09 crc kubenswrapper[4821]: I0930 17:29:09.812672 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7548j" Sep 30 17:29:09 crc kubenswrapper[4821]: I0930 17:29:09.873318 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83dd6b67-690a-48d1-9662-2c638b9db5c9-catalog-content\") pod \"83dd6b67-690a-48d1-9662-2c638b9db5c9\" (UID: \"83dd6b67-690a-48d1-9662-2c638b9db5c9\") " Sep 30 17:29:09 crc kubenswrapper[4821]: I0930 17:29:09.873523 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83dd6b67-690a-48d1-9662-2c638b9db5c9-utilities\") pod \"83dd6b67-690a-48d1-9662-2c638b9db5c9\" (UID: \"83dd6b67-690a-48d1-9662-2c638b9db5c9\") " Sep 30 17:29:09 crc kubenswrapper[4821]: I0930 17:29:09.873656 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgpwh\" (UniqueName: \"kubernetes.io/projected/83dd6b67-690a-48d1-9662-2c638b9db5c9-kube-api-access-kgpwh\") pod \"83dd6b67-690a-48d1-9662-2c638b9db5c9\" (UID: \"83dd6b67-690a-48d1-9662-2c638b9db5c9\") " Sep 30 17:29:09 crc kubenswrapper[4821]: I0930 17:29:09.874598 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83dd6b67-690a-48d1-9662-2c638b9db5c9-utilities" (OuterVolumeSpecName: "utilities") pod "83dd6b67-690a-48d1-9662-2c638b9db5c9" (UID: "83dd6b67-690a-48d1-9662-2c638b9db5c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:29:09 crc kubenswrapper[4821]: I0930 17:29:09.885979 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83dd6b67-690a-48d1-9662-2c638b9db5c9-kube-api-access-kgpwh" (OuterVolumeSpecName: "kube-api-access-kgpwh") pod "83dd6b67-690a-48d1-9662-2c638b9db5c9" (UID: "83dd6b67-690a-48d1-9662-2c638b9db5c9"). InnerVolumeSpecName "kube-api-access-kgpwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:29:09 crc kubenswrapper[4821]: I0930 17:29:09.957570 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83dd6b67-690a-48d1-9662-2c638b9db5c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83dd6b67-690a-48d1-9662-2c638b9db5c9" (UID: "83dd6b67-690a-48d1-9662-2c638b9db5c9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:29:09 crc kubenswrapper[4821]: I0930 17:29:09.976102 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83dd6b67-690a-48d1-9662-2c638b9db5c9-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:29:09 crc kubenswrapper[4821]: I0930 17:29:09.976163 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgpwh\" (UniqueName: \"kubernetes.io/projected/83dd6b67-690a-48d1-9662-2c638b9db5c9-kube-api-access-kgpwh\") on node \"crc\" DevicePath \"\"" Sep 30 17:29:09 crc kubenswrapper[4821]: I0930 17:29:09.976190 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83dd6b67-690a-48d1-9662-2c638b9db5c9-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:29:10 crc kubenswrapper[4821]: I0930 17:29:10.431719 4821 generic.go:334] "Generic (PLEG): container finished" podID="83dd6b67-690a-48d1-9662-2c638b9db5c9" containerID="84f0c14dd79422a1a470203822ebcde9aab4de973ef922b3812aeca72e9be4e3" exitCode=0 Sep 30 17:29:10 crc kubenswrapper[4821]: I0930 17:29:10.431792 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7548j" Sep 30 17:29:10 crc kubenswrapper[4821]: I0930 17:29:10.431818 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7548j" event={"ID":"83dd6b67-690a-48d1-9662-2c638b9db5c9","Type":"ContainerDied","Data":"84f0c14dd79422a1a470203822ebcde9aab4de973ef922b3812aeca72e9be4e3"} Sep 30 17:29:10 crc kubenswrapper[4821]: I0930 17:29:10.431915 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7548j" event={"ID":"83dd6b67-690a-48d1-9662-2c638b9db5c9","Type":"ContainerDied","Data":"0c8977263a4ca097176a014f4f22a4a6a7fc04a7a18d3215b2b0f6d3680c3ed9"} Sep 30 17:29:10 crc kubenswrapper[4821]: I0930 17:29:10.431987 4821 scope.go:117] "RemoveContainer" containerID="84f0c14dd79422a1a470203822ebcde9aab4de973ef922b3812aeca72e9be4e3" Sep 30 17:29:10 crc kubenswrapper[4821]: I0930 17:29:10.461236 4821 scope.go:117] "RemoveContainer" containerID="c06ec1266635667a8fe4de8e0ccad9f66e498c491073e7a4682f21e6ef34201d" Sep 30 17:29:10 crc kubenswrapper[4821]: I0930 17:29:10.462799 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7548j"] Sep 30 17:29:10 crc kubenswrapper[4821]: I0930 17:29:10.469927 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7548j"] Sep 30 17:29:10 crc kubenswrapper[4821]: I0930 17:29:10.490523 4821 scope.go:117] "RemoveContainer" containerID="e424e47475841617acab5f51d7751480dcc6721a4da20ef2c46d151da4642896" Sep 30 17:29:10 crc kubenswrapper[4821]: I0930 17:29:10.537936 4821 scope.go:117] "RemoveContainer" containerID="84f0c14dd79422a1a470203822ebcde9aab4de973ef922b3812aeca72e9be4e3" Sep 30 17:29:10 crc kubenswrapper[4821]: E0930 17:29:10.538510 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84f0c14dd79422a1a470203822ebcde9aab4de973ef922b3812aeca72e9be4e3\": container with ID starting with 84f0c14dd79422a1a470203822ebcde9aab4de973ef922b3812aeca72e9be4e3 not found: ID does not exist" containerID="84f0c14dd79422a1a470203822ebcde9aab4de973ef922b3812aeca72e9be4e3" Sep 30 17:29:10 crc kubenswrapper[4821]: I0930 17:29:10.538561 4821 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84f0c14dd79422a1a470203822ebcde9aab4de973ef922b3812aeca72e9be4e3"} err="failed to get container status \"84f0c14dd79422a1a470203822ebcde9aab4de973ef922b3812aeca72e9be4e3\": rpc error: code = NotFound desc = could not find container \"84f0c14dd79422a1a470203822ebcde9aab4de973ef922b3812aeca72e9be4e3\": container with ID starting with 84f0c14dd79422a1a470203822ebcde9aab4de973ef922b3812aeca72e9be4e3 not found: ID does not exist" Sep 30 17:29:10 crc kubenswrapper[4821]: I0930 17:29:10.538592 4821 scope.go:117] "RemoveContainer" containerID="c06ec1266635667a8fe4de8e0ccad9f66e498c491073e7a4682f21e6ef34201d" Sep 30 17:29:10 crc kubenswrapper[4821]: E0930 17:29:10.539107 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c06ec1266635667a8fe4de8e0ccad9f66e498c491073e7a4682f21e6ef34201d\": container with ID starting with c06ec1266635667a8fe4de8e0ccad9f66e498c491073e7a4682f21e6ef34201d not found: ID does not exist" containerID="c06ec1266635667a8fe4de8e0ccad9f66e498c491073e7a4682f21e6ef34201d" Sep 30 17:29:10 crc kubenswrapper[4821]: I0930 17:29:10.539141 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c06ec1266635667a8fe4de8e0ccad9f66e498c491073e7a4682f21e6ef34201d"} err="failed to get container status \"c06ec1266635667a8fe4de8e0ccad9f66e498c491073e7a4682f21e6ef34201d\": rpc error: code = NotFound desc = could not find container \"c06ec1266635667a8fe4de8e0ccad9f66e498c491073e7a4682f21e6ef34201d\": container with ID starting with c06ec1266635667a8fe4de8e0ccad9f66e498c491073e7a4682f21e6ef34201d not found: ID does not exist" Sep 30 17:29:10 crc kubenswrapper[4821]: I0930 17:29:10.539163 4821 scope.go:117] "RemoveContainer" containerID="e424e47475841617acab5f51d7751480dcc6721a4da20ef2c46d151da4642896" Sep 30 17:29:10 crc kubenswrapper[4821]: E0930 17:29:10.539472 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e424e47475841617acab5f51d7751480dcc6721a4da20ef2c46d151da4642896\": container with ID starting with e424e47475841617acab5f51d7751480dcc6721a4da20ef2c46d151da4642896 not found: ID does not exist" containerID="e424e47475841617acab5f51d7751480dcc6721a4da20ef2c46d151da4642896" Sep 30 17:29:10 crc kubenswrapper[4821]: I0930 17:29:10.539504 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e424e47475841617acab5f51d7751480dcc6721a4da20ef2c46d151da4642896"} err="failed to get container status \"e424e47475841617acab5f51d7751480dcc6721a4da20ef2c46d151da4642896\": rpc error: code = NotFound desc = could not find container \"e424e47475841617acab5f51d7751480dcc6721a4da20ef2c46d151da4642896\": container with ID starting with e424e47475841617acab5f51d7751480dcc6721a4da20ef2c46d151da4642896 not found: ID does not exist" Sep 30 17:29:10 crc kubenswrapper[4821]: I0930 17:29:10.717327 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83dd6b67-690a-48d1-9662-2c638b9db5c9" path="/var/lib/kubelet/pods/83dd6b67-690a-48d1-9662-2c638b9db5c9/volumes" Sep 30 17:29:11 crc kubenswrapper[4821]: I0930 17:29:11.025212 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-8c12-account-create-pdc6j"] Sep 30 17:29:11 crc kubenswrapper[4821]: I0930 17:29:11.032471 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/glance-8c12-account-create-pdc6j"] Sep 30 17:29:12 crc kubenswrapper[4821]: I0930 17:29:12.716354 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30d940f8-4f55-4c00-a99c-e918eb97c401" path="/var/lib/kubelet/pods/30d940f8-4f55-4c00-a99c-e918eb97c401/volumes" Sep 30 17:29:13 crc kubenswrapper[4821]: I0930 17:29:13.023483 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-h9rqx"] Sep 30 17:29:13 crc kubenswrapper[4821]: I0930 17:29:13.029852 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-h9rqx"] Sep 30 17:29:13 crc kubenswrapper[4821]: I0930 17:29:13.043480 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-qvvlk"] Sep 30 17:29:13 crc kubenswrapper[4821]: I0930 17:29:13.054209 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-qvvlk"] Sep 30 17:29:14 crc kubenswrapper[4821]: I0930 17:29:14.717840 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77c144f3-5510-466a-b6d7-8a66896a5a89" path="/var/lib/kubelet/pods/77c144f3-5510-466a-b6d7-8a66896a5a89/volumes" Sep 30 17:29:14 crc kubenswrapper[4821]: I0930 17:29:14.718801 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c66275b-5969-4d5b-94b3-e5b7af477685" path="/var/lib/kubelet/pods/9c66275b-5969-4d5b-94b3-e5b7af477685/volumes" Sep 30 17:29:16 crc kubenswrapper[4821]: I0930 17:29:16.036327 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-46a4-account-create-qkt5h"] Sep 30 17:29:16 crc kubenswrapper[4821]: I0930 17:29:16.046233 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-46a4-account-create-qkt5h"] Sep 30 17:29:16 crc kubenswrapper[4821]: I0930 17:29:16.720153 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26bf384d-37cd-463e-b991-e4ba8646dd99" path="/var/lib/kubelet/pods/26bf384d-37cd-463e-b991-e4ba8646dd99/volumes" Sep 30 17:29:17 crc kubenswrapper[4821]: I0930 17:29:17.706824 4821 scope.go:117] "RemoveContainer" containerID="3b2af72a05496030d699d687642569746e55f2a2d5c57c654aa59598f54ecc5d" Sep 30 17:29:17 crc kubenswrapper[4821]: E0930 17:29:17.707335 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:29:28 crc kubenswrapper[4821]: I0930 17:29:28.119913 4821 scope.go:117] "RemoveContainer" containerID="eb5f276145bbba19b7088a9c8324ea2a937e5f7dffdc32cbc2b0d93314a0f626" Sep 30 17:29:28 crc kubenswrapper[4821]: I0930 17:29:28.162151 4821 scope.go:117] "RemoveContainer" containerID="b122a5c3c8a7f2e2b13e62e079dd944939a901e4c28bed1302d4e2332a0df4ea" Sep 30 17:29:28 crc kubenswrapper[4821]: I0930 17:29:28.204919 4821 scope.go:117] "RemoveContainer" containerID="bc5632130b2444ef6061bcf525d53b20c45f0530d414a60bc0f0e320400213ee" Sep 30 17:29:28 crc kubenswrapper[4821]: I0930 17:29:28.257523 4821 scope.go:117] "RemoveContainer" containerID="e77a8a506ac11ebdf511feb7b6a60516b9aef94db6df12a16013b03a3176e3ce" Sep 30 17:29:28 crc kubenswrapper[4821]: I0930 17:29:28.288538 4821 scope.go:117] "RemoveContainer" 
containerID="8817fb7a7429171b786358c114c47effa44284cd366c3e9219d699d2b5f40178" Sep 30 17:29:28 crc kubenswrapper[4821]: I0930 17:29:28.329163 4821 scope.go:117] "RemoveContainer" containerID="ea94d2a9ec6e1fc9b4ce6212ffe1316bb4d6debd03f2ded2c6eb1458a8c21e93" Sep 30 17:29:28 crc kubenswrapper[4821]: I0930 17:29:28.360141 4821 scope.go:117] "RemoveContainer" containerID="d17081a9494e9b22bcc37842380d01c1e3f9a28a80bee9c39bcd8b61ab67a8ce" Sep 30 17:29:28 crc kubenswrapper[4821]: I0930 17:29:28.375546 4821 scope.go:117] "RemoveContainer" containerID="f5979dec0f7d1d1c8fb457a5a8d5ed103279b6c225851f41508858cfe6624709" Sep 30 17:29:31 crc kubenswrapper[4821]: I0930 17:29:31.030795 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-9d55-account-create-6fkq5"] Sep 30 17:29:31 crc kubenswrapper[4821]: I0930 17:29:31.042090 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-d836-account-create-9k2wv"] Sep 30 17:29:31 crc kubenswrapper[4821]: I0930 17:29:31.050777 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-9d55-account-create-6fkq5"] Sep 30 17:29:31 crc kubenswrapper[4821]: I0930 17:29:31.063259 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-d836-account-create-9k2wv"] Sep 30 17:29:32 crc kubenswrapper[4821]: I0930 17:29:32.707332 4821 scope.go:117] "RemoveContainer" containerID="3b2af72a05496030d699d687642569746e55f2a2d5c57c654aa59598f54ecc5d" Sep 30 17:29:32 crc kubenswrapper[4821]: E0930 17:29:32.707781 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:29:32 crc kubenswrapper[4821]: I0930 17:29:32.716148 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="595e96f3-23bb-4671-a36d-332140fdeb05" path="/var/lib/kubelet/pods/595e96f3-23bb-4671-a36d-332140fdeb05/volumes" Sep 30 17:29:32 crc kubenswrapper[4821]: I0930 17:29:32.716914 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e03e9a0d-52a6-4f78-9669-ea77e8d009a0" path="/var/lib/kubelet/pods/e03e9a0d-52a6-4f78-9669-ea77e8d009a0/volumes" Sep 30 17:29:35 crc kubenswrapper[4821]: I0930 17:29:35.050781 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-tmcbt"] Sep 30 17:29:35 crc kubenswrapper[4821]: I0930 17:29:35.061395 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-tmcbt"] Sep 30 17:29:36 crc kubenswrapper[4821]: I0930 17:29:36.035612 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-gfht9"] Sep 30 17:29:36 crc kubenswrapper[4821]: I0930 17:29:36.050780 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-gfht9"] Sep 30 17:29:36 crc kubenswrapper[4821]: I0930 17:29:36.718232 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76b2a1e3-7600-4e3f-a2ec-91983582bfa0" path="/var/lib/kubelet/pods/76b2a1e3-7600-4e3f-a2ec-91983582bfa0/volumes" Sep 30 17:29:36 crc kubenswrapper[4821]: I0930 17:29:36.718958 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a311a148-21f0-4b76-81f6-c9190d61a8c3" 
path="/var/lib/kubelet/pods/a311a148-21f0-4b76-81f6-c9190d61a8c3/volumes" Sep 30 17:29:44 crc kubenswrapper[4821]: I0930 17:29:44.713845 4821 scope.go:117] "RemoveContainer" containerID="3b2af72a05496030d699d687642569746e55f2a2d5c57c654aa59598f54ecc5d" Sep 30 17:29:44 crc kubenswrapper[4821]: E0930 17:29:44.714340 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:29:57 crc kubenswrapper[4821]: I0930 17:29:57.708176 4821 scope.go:117] "RemoveContainer" containerID="3b2af72a05496030d699d687642569746e55f2a2d5c57c654aa59598f54ecc5d" Sep 30 17:29:57 crc kubenswrapper[4821]: E0930 17:29:57.708944 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:30:00 crc kubenswrapper[4821]: I0930 17:30:00.141660 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320890-dnpv8"] Sep 30 17:30:00 crc kubenswrapper[4821]: E0930 17:30:00.142341 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83dd6b67-690a-48d1-9662-2c638b9db5c9" containerName="extract-content" Sep 30 17:30:00 crc kubenswrapper[4821]: I0930 17:30:00.142352 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="83dd6b67-690a-48d1-9662-2c638b9db5c9" containerName="extract-content" Sep 30 17:30:00 crc kubenswrapper[4821]: E0930 17:30:00.142373 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83dd6b67-690a-48d1-9662-2c638b9db5c9" containerName="registry-server" Sep 30 17:30:00 crc kubenswrapper[4821]: I0930 17:30:00.142379 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="83dd6b67-690a-48d1-9662-2c638b9db5c9" containerName="registry-server" Sep 30 17:30:00 crc kubenswrapper[4821]: E0930 17:30:00.142423 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83dd6b67-690a-48d1-9662-2c638b9db5c9" containerName="extract-utilities" Sep 30 17:30:00 crc kubenswrapper[4821]: I0930 17:30:00.142430 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="83dd6b67-690a-48d1-9662-2c638b9db5c9" containerName="extract-utilities" Sep 30 17:30:00 crc kubenswrapper[4821]: I0930 17:30:00.142650 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="83dd6b67-690a-48d1-9662-2c638b9db5c9" containerName="registry-server" Sep 30 17:30:00 crc kubenswrapper[4821]: I0930 17:30:00.143225 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320890-dnpv8" Sep 30 17:30:00 crc kubenswrapper[4821]: I0930 17:30:00.148651 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Sep 30 17:30:00 crc kubenswrapper[4821]: I0930 17:30:00.148857 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Sep 30 17:30:00 crc kubenswrapper[4821]: I0930 17:30:00.156802 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320890-dnpv8"] Sep 30 17:30:00 crc kubenswrapper[4821]: I0930 17:30:00.346192 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfjkj\" (UniqueName: \"kubernetes.io/projected/f736d515-f8b1-4434-8f52-310734fcd6d1-kube-api-access-tfjkj\") pod \"collect-profiles-29320890-dnpv8\" (UID: \"f736d515-f8b1-4434-8f52-310734fcd6d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320890-dnpv8" Sep 30 17:30:00 crc kubenswrapper[4821]: I0930 17:30:00.346479 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f736d515-f8b1-4434-8f52-310734fcd6d1-secret-volume\") pod \"collect-profiles-29320890-dnpv8\" (UID: \"f736d515-f8b1-4434-8f52-310734fcd6d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320890-dnpv8" Sep 30 17:30:00 crc kubenswrapper[4821]: I0930 17:30:00.347056 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f736d515-f8b1-4434-8f52-310734fcd6d1-config-volume\") pod \"collect-profiles-29320890-dnpv8\" (UID: \"f736d515-f8b1-4434-8f52-310734fcd6d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320890-dnpv8" Sep 30 17:30:00 crc kubenswrapper[4821]: I0930 17:30:00.448404 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f736d515-f8b1-4434-8f52-310734fcd6d1-config-volume\") pod \"collect-profiles-29320890-dnpv8\" (UID: \"f736d515-f8b1-4434-8f52-310734fcd6d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320890-dnpv8" Sep 30 17:30:00 crc kubenswrapper[4821]: I0930 17:30:00.448460 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfjkj\" (UniqueName: \"kubernetes.io/projected/f736d515-f8b1-4434-8f52-310734fcd6d1-kube-api-access-tfjkj\") pod \"collect-profiles-29320890-dnpv8\" (UID: \"f736d515-f8b1-4434-8f52-310734fcd6d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320890-dnpv8" Sep 30 17:30:00 crc kubenswrapper[4821]: I0930 17:30:00.448485 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f736d515-f8b1-4434-8f52-310734fcd6d1-secret-volume\") pod \"collect-profiles-29320890-dnpv8\" (UID: \"f736d515-f8b1-4434-8f52-310734fcd6d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320890-dnpv8" Sep 30 17:30:00 crc kubenswrapper[4821]: I0930 17:30:00.449968 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f736d515-f8b1-4434-8f52-310734fcd6d1-config-volume\") pod 
\"collect-profiles-29320890-dnpv8\" (UID: \"f736d515-f8b1-4434-8f52-310734fcd6d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320890-dnpv8" Sep 30 17:30:00 crc kubenswrapper[4821]: I0930 17:30:00.466098 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f736d515-f8b1-4434-8f52-310734fcd6d1-secret-volume\") pod \"collect-profiles-29320890-dnpv8\" (UID: \"f736d515-f8b1-4434-8f52-310734fcd6d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320890-dnpv8" Sep 30 17:30:00 crc kubenswrapper[4821]: I0930 17:30:00.468357 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfjkj\" (UniqueName: \"kubernetes.io/projected/f736d515-f8b1-4434-8f52-310734fcd6d1-kube-api-access-tfjkj\") pod \"collect-profiles-29320890-dnpv8\" (UID: \"f736d515-f8b1-4434-8f52-310734fcd6d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320890-dnpv8" Sep 30 17:30:00 crc kubenswrapper[4821]: I0930 17:30:00.481267 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320890-dnpv8" Sep 30 17:30:00 crc kubenswrapper[4821]: I0930 17:30:00.901189 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320890-dnpv8"] Sep 30 17:30:01 crc kubenswrapper[4821]: I0930 17:30:01.859623 4821 generic.go:334] "Generic (PLEG): container finished" podID="f736d515-f8b1-4434-8f52-310734fcd6d1" containerID="62be6bb0dc5c5f0e2f99f06a2d95d4a7325c90e2fbfe95a046146fe04af2412d" exitCode=0 Sep 30 17:30:01 crc kubenswrapper[4821]: I0930 17:30:01.859728 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320890-dnpv8" event={"ID":"f736d515-f8b1-4434-8f52-310734fcd6d1","Type":"ContainerDied","Data":"62be6bb0dc5c5f0e2f99f06a2d95d4a7325c90e2fbfe95a046146fe04af2412d"} Sep 30 17:30:01 crc kubenswrapper[4821]: I0930 17:30:01.859975 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320890-dnpv8" event={"ID":"f736d515-f8b1-4434-8f52-310734fcd6d1","Type":"ContainerStarted","Data":"793bc29c761d877c0c88723593198fecfb8c62034758f2cefbdf7bb2334e5c2e"} Sep 30 17:30:03 crc kubenswrapper[4821]: I0930 17:30:03.200440 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320890-dnpv8" Sep 30 17:30:03 crc kubenswrapper[4821]: I0930 17:30:03.212055 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f736d515-f8b1-4434-8f52-310734fcd6d1-secret-volume\") pod \"f736d515-f8b1-4434-8f52-310734fcd6d1\" (UID: \"f736d515-f8b1-4434-8f52-310734fcd6d1\") " Sep 30 17:30:03 crc kubenswrapper[4821]: I0930 17:30:03.212473 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfjkj\" (UniqueName: \"kubernetes.io/projected/f736d515-f8b1-4434-8f52-310734fcd6d1-kube-api-access-tfjkj\") pod \"f736d515-f8b1-4434-8f52-310734fcd6d1\" (UID: \"f736d515-f8b1-4434-8f52-310734fcd6d1\") " Sep 30 17:30:03 crc kubenswrapper[4821]: I0930 17:30:03.212723 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f736d515-f8b1-4434-8f52-310734fcd6d1-config-volume\") pod \"f736d515-f8b1-4434-8f52-310734fcd6d1\" (UID: \"f736d515-f8b1-4434-8f52-310734fcd6d1\") " Sep 30 17:30:03 crc kubenswrapper[4821]: I0930 17:30:03.213558 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f736d515-f8b1-4434-8f52-310734fcd6d1-config-volume" (OuterVolumeSpecName: "config-volume") pod "f736d515-f8b1-4434-8f52-310734fcd6d1" (UID: "f736d515-f8b1-4434-8f52-310734fcd6d1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:30:03 crc kubenswrapper[4821]: I0930 17:30:03.224038 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f736d515-f8b1-4434-8f52-310734fcd6d1-kube-api-access-tfjkj" (OuterVolumeSpecName: "kube-api-access-tfjkj") pod "f736d515-f8b1-4434-8f52-310734fcd6d1" (UID: "f736d515-f8b1-4434-8f52-310734fcd6d1"). InnerVolumeSpecName "kube-api-access-tfjkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:30:03 crc kubenswrapper[4821]: I0930 17:30:03.228353 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f736d515-f8b1-4434-8f52-310734fcd6d1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f736d515-f8b1-4434-8f52-310734fcd6d1" (UID: "f736d515-f8b1-4434-8f52-310734fcd6d1"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:30:03 crc kubenswrapper[4821]: I0930 17:30:03.314683 4821 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f736d515-f8b1-4434-8f52-310734fcd6d1-secret-volume\") on node \"crc\" DevicePath \"\"" Sep 30 17:30:03 crc kubenswrapper[4821]: I0930 17:30:03.315172 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfjkj\" (UniqueName: \"kubernetes.io/projected/f736d515-f8b1-4434-8f52-310734fcd6d1-kube-api-access-tfjkj\") on node \"crc\" DevicePath \"\"" Sep 30 17:30:03 crc kubenswrapper[4821]: I0930 17:30:03.315339 4821 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f736d515-f8b1-4434-8f52-310734fcd6d1-config-volume\") on node \"crc\" DevicePath \"\"" Sep 30 17:30:03 crc kubenswrapper[4821]: I0930 17:30:03.878067 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320890-dnpv8" event={"ID":"f736d515-f8b1-4434-8f52-310734fcd6d1","Type":"ContainerDied","Data":"793bc29c761d877c0c88723593198fecfb8c62034758f2cefbdf7bb2334e5c2e"} Sep 30 17:30:03 crc kubenswrapper[4821]: I0930 17:30:03.878431 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="793bc29c761d877c0c88723593198fecfb8c62034758f2cefbdf7bb2334e5c2e" Sep 30 17:30:03 crc kubenswrapper[4821]: I0930 17:30:03.878140 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320890-dnpv8" Sep 30 17:30:05 crc kubenswrapper[4821]: I0930 17:30:05.039450 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-jjbr2"] Sep 30 17:30:05 crc kubenswrapper[4821]: I0930 17:30:05.049887 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-jjbr2"] Sep 30 17:30:06 crc kubenswrapper[4821]: I0930 17:30:06.717808 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a2b06c3-690a-469e-bdf6-5033d9be88e8" path="/var/lib/kubelet/pods/5a2b06c3-690a-469e-bdf6-5033d9be88e8/volumes" Sep 30 17:30:09 crc kubenswrapper[4821]: I0930 17:30:09.707848 4821 scope.go:117] "RemoveContainer" containerID="3b2af72a05496030d699d687642569746e55f2a2d5c57c654aa59598f54ecc5d" Sep 30 17:30:09 crc kubenswrapper[4821]: E0930 17:30:09.708536 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:30:10 crc kubenswrapper[4821]: I0930 17:30:10.024502 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-mk2vd"] Sep 30 17:30:10 crc kubenswrapper[4821]: I0930 17:30:10.065494 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-mk2vd"] Sep 30 17:30:10 crc kubenswrapper[4821]: I0930 17:30:10.718315 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b1226c0-ea59-4c57-9837-cafbb926f373" path="/var/lib/kubelet/pods/7b1226c0-ea59-4c57-9837-cafbb926f373/volumes" Sep 30 17:30:15 crc kubenswrapper[4821]: I0930 17:30:15.030471 4821 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/keystone-bootstrap-j72gp"] Sep 30 17:30:15 crc kubenswrapper[4821]: I0930 17:30:15.041545 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-j72gp"] Sep 30 17:30:16 crc kubenswrapper[4821]: I0930 17:30:16.722793 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="008af46f-5c9c-44f6-beb7-fa105649d52b" path="/var/lib/kubelet/pods/008af46f-5c9c-44f6-beb7-fa105649d52b/volumes" Sep 30 17:30:24 crc kubenswrapper[4821]: I0930 17:30:24.712468 4821 scope.go:117] "RemoveContainer" containerID="3b2af72a05496030d699d687642569746e55f2a2d5c57c654aa59598f54ecc5d" Sep 30 17:30:24 crc kubenswrapper[4821]: E0930 17:30:24.712951 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:30:27 crc kubenswrapper[4821]: I0930 17:30:27.057687 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-kr849"] Sep 30 17:30:27 crc kubenswrapper[4821]: I0930 17:30:27.067568 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-kr849"] Sep 30 17:30:28 crc kubenswrapper[4821]: I0930 17:30:28.521062 4821 scope.go:117] "RemoveContainer" containerID="34617b1a24cead19921d6337ba964022401b56a3e645b303d906029c2264eb59" Sep 30 17:30:28 crc kubenswrapper[4821]: I0930 17:30:28.646353 4821 scope.go:117] "RemoveContainer" containerID="41d623733426c4d9b8b2a417804dd99ac7a2c7f7f9bf772a0b593072e55ffa2f" Sep 30 17:30:28 crc kubenswrapper[4821]: I0930 17:30:28.683393 4821 scope.go:117] "RemoveContainer" containerID="bf17ce3e4f6832c77e8836b5040626e70279ca0959c8ed345715aa513f668b2e" Sep 30 17:30:28 crc kubenswrapper[4821]: I0930 17:30:28.715756 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9aa40c0f-e07d-43de-92d6-60ba8d6b668d" path="/var/lib/kubelet/pods/9aa40c0f-e07d-43de-92d6-60ba8d6b668d/volumes" Sep 30 17:30:28 crc kubenswrapper[4821]: I0930 17:30:28.716595 4821 scope.go:117] "RemoveContainer" containerID="00ff686225dc4b40c1ffbd91cceca8a73995eee175c3caeb4e2fada4e36e43e3" Sep 30 17:30:28 crc kubenswrapper[4821]: I0930 17:30:28.752651 4821 scope.go:117] "RemoveContainer" containerID="916c649311d7d146db7ff3ac5f14fd5dbb537ccfa5c1de5bbbaae81a888cb028" Sep 30 17:30:28 crc kubenswrapper[4821]: I0930 17:30:28.793231 4821 scope.go:117] "RemoveContainer" containerID="bdf6b5a2b3f78680a27b29b3d3d3295f665cba81740912d70f221c7049d0db32" Sep 30 17:30:28 crc kubenswrapper[4821]: I0930 17:30:28.820768 4821 scope.go:117] "RemoveContainer" containerID="d38d2bb0cbe872d7cf48cbd1f17e8efbe212914c1257e9ec428fe8aa5fe2bfec" Sep 30 17:30:28 crc kubenswrapper[4821]: I0930 17:30:28.840466 4821 scope.go:117] "RemoveContainer" containerID="a0cd1c0621763e1183f5cf6297cd3a609ca8904e9e3fdd0ad2401660d31688f2" Sep 30 17:30:35 crc kubenswrapper[4821]: I0930 17:30:35.707051 4821 scope.go:117] "RemoveContainer" containerID="3b2af72a05496030d699d687642569746e55f2a2d5c57c654aa59598f54ecc5d" Sep 30 17:30:35 crc kubenswrapper[4821]: E0930 17:30:35.708038 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:30:48 crc kubenswrapper[4821]: I0930 17:30:48.708382 4821 scope.go:117] "RemoveContainer" containerID="3b2af72a05496030d699d687642569746e55f2a2d5c57c654aa59598f54ecc5d" Sep 30 17:30:48 crc kubenswrapper[4821]: E0930 17:30:48.709344 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:31:01 crc kubenswrapper[4821]: I0930 17:31:01.042359 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-knwz7"] Sep 30 17:31:01 crc kubenswrapper[4821]: I0930 17:31:01.053362 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-wdd45"] Sep 30 17:31:01 crc kubenswrapper[4821]: I0930 17:31:01.062986 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-sx76g"] Sep 30 17:31:01 crc kubenswrapper[4821]: I0930 17:31:01.070756 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-wdd45"] Sep 30 17:31:01 crc kubenswrapper[4821]: I0930 17:31:01.078731 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-sx76g"] Sep 30 17:31:01 crc kubenswrapper[4821]: I0930 17:31:01.084568 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-knwz7"] Sep 30 17:31:02 crc kubenswrapper[4821]: I0930 17:31:02.708162 4821 scope.go:117] "RemoveContainer" containerID="3b2af72a05496030d699d687642569746e55f2a2d5c57c654aa59598f54ecc5d" Sep 30 17:31:02 crc kubenswrapper[4821]: E0930 17:31:02.708829 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:31:02 crc kubenswrapper[4821]: I0930 17:31:02.723486 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44c601d1-e185-448f-81b1-1b42fbe9bb3f" path="/var/lib/kubelet/pods/44c601d1-e185-448f-81b1-1b42fbe9bb3f/volumes" Sep 30 17:31:02 crc kubenswrapper[4821]: I0930 17:31:02.725952 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81f8ab48-a8ad-47e5-b71e-0fe3d39bff5e" path="/var/lib/kubelet/pods/81f8ab48-a8ad-47e5-b71e-0fe3d39bff5e/volumes" Sep 30 17:31:02 crc kubenswrapper[4821]: I0930 17:31:02.726474 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a00a4364-421a-47dc-8dc9-a8a54f71f563" path="/var/lib/kubelet/pods/a00a4364-421a-47dc-8dc9-a8a54f71f563/volumes" Sep 30 17:31:07 crc kubenswrapper[4821]: I0930 17:31:07.036304 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-9057-account-create-5jkzz"] Sep 30 17:31:07 crc kubenswrapper[4821]: I0930 17:31:07.045009 4821 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-9057-account-create-5jkzz"] Sep 30 17:31:08 crc kubenswrapper[4821]: I0930 17:31:08.717753 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7b7e625-a3dd-436c-9dba-8452958f101d" path="/var/lib/kubelet/pods/b7b7e625-a3dd-436c-9dba-8452958f101d/volumes" Sep 30 17:31:16 crc kubenswrapper[4821]: I0930 17:31:16.707456 4821 scope.go:117] "RemoveContainer" containerID="3b2af72a05496030d699d687642569746e55f2a2d5c57c654aa59598f54ecc5d" Sep 30 17:31:16 crc kubenswrapper[4821]: E0930 17:31:16.708945 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:31:17 crc kubenswrapper[4821]: I0930 17:31:17.068412 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-0881-account-create-s2482"] Sep 30 17:31:17 crc kubenswrapper[4821]: I0930 17:31:17.081466 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-0881-account-create-s2482"] Sep 30 17:31:17 crc kubenswrapper[4821]: I0930 17:31:17.089258 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-27c1-account-create-z6qzm"] Sep 30 17:31:17 crc kubenswrapper[4821]: I0930 17:31:17.102538 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-27c1-account-create-z6qzm"] Sep 30 17:31:18 crc kubenswrapper[4821]: I0930 17:31:18.716398 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84c218e4-5e8b-4a1b-83ff-6bde410b1bab" path="/var/lib/kubelet/pods/84c218e4-5e8b-4a1b-83ff-6bde410b1bab/volumes" Sep 30 17:31:18 crc kubenswrapper[4821]: I0930 17:31:18.717338 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2911be1-4bd5-4cbc-82f9-c08836c4d50a" path="/var/lib/kubelet/pods/a2911be1-4bd5-4cbc-82f9-c08836c4d50a/volumes" Sep 30 17:31:28 crc kubenswrapper[4821]: I0930 17:31:28.707360 4821 scope.go:117] "RemoveContainer" containerID="3b2af72a05496030d699d687642569746e55f2a2d5c57c654aa59598f54ecc5d" Sep 30 17:31:28 crc kubenswrapper[4821]: E0930 17:31:28.708017 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:31:28 crc kubenswrapper[4821]: I0930 17:31:28.976198 4821 scope.go:117] "RemoveContainer" containerID="9b0d8976e1096d6476ab0e99d8f702edd0b1789ca31b7f5fa08e54adc96c3165" Sep 30 17:31:29 crc kubenswrapper[4821]: I0930 17:31:29.006739 4821 scope.go:117] "RemoveContainer" containerID="bcb3a3caa45f4d3faba7906f1180cb61d00db893a87459507a7a966f6d865c16" Sep 30 17:31:29 crc kubenswrapper[4821]: I0930 17:31:29.057677 4821 scope.go:117] "RemoveContainer" containerID="6058b6cefab7c1516d9467bc7f648ea77502334eb371b5559250ddaf2cb601df" Sep 30 17:31:29 crc kubenswrapper[4821]: I0930 17:31:29.113463 4821 scope.go:117] "RemoveContainer" 
containerID="0923cc06fe202faa8dc5933a7426f2ddabfc52f5924962ebcd43a634ed3e35eb" Sep 30 17:31:29 crc kubenswrapper[4821]: I0930 17:31:29.155119 4821 scope.go:117] "RemoveContainer" containerID="cf90332090bbed33666f9731a7ea4a122519d567a5e1d351c750a17823a716e3" Sep 30 17:31:29 crc kubenswrapper[4821]: I0930 17:31:29.206989 4821 scope.go:117] "RemoveContainer" containerID="bf02100b3ea37cd5bbafadfc3b22788f79578fa4d776407fc1d00e8fc5e0ecc6" Sep 30 17:31:39 crc kubenswrapper[4821]: I0930 17:31:39.045354 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-j6rmp"] Sep 30 17:31:39 crc kubenswrapper[4821]: I0930 17:31:39.057152 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-j6rmp"] Sep 30 17:31:40 crc kubenswrapper[4821]: I0930 17:31:40.535980 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8zbld"] Sep 30 17:31:40 crc kubenswrapper[4821]: E0930 17:31:40.536764 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f736d515-f8b1-4434-8f52-310734fcd6d1" containerName="collect-profiles" Sep 30 17:31:40 crc kubenswrapper[4821]: I0930 17:31:40.536779 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="f736d515-f8b1-4434-8f52-310734fcd6d1" containerName="collect-profiles" Sep 30 17:31:40 crc kubenswrapper[4821]: I0930 17:31:40.537004 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="f736d515-f8b1-4434-8f52-310734fcd6d1" containerName="collect-profiles" Sep 30 17:31:40 crc kubenswrapper[4821]: I0930 17:31:40.538558 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8zbld" Sep 30 17:31:40 crc kubenswrapper[4821]: I0930 17:31:40.541974 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8zbld"] Sep 30 17:31:40 crc kubenswrapper[4821]: I0930 17:31:40.659672 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4-utilities\") pod \"certified-operators-8zbld\" (UID: \"c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4\") " pod="openshift-marketplace/certified-operators-8zbld" Sep 30 17:31:40 crc kubenswrapper[4821]: I0930 17:31:40.659743 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjzsh\" (UniqueName: \"kubernetes.io/projected/c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4-kube-api-access-sjzsh\") pod \"certified-operators-8zbld\" (UID: \"c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4\") " pod="openshift-marketplace/certified-operators-8zbld" Sep 30 17:31:40 crc kubenswrapper[4821]: I0930 17:31:40.659786 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4-catalog-content\") pod \"certified-operators-8zbld\" (UID: \"c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4\") " pod="openshift-marketplace/certified-operators-8zbld" Sep 30 17:31:40 crc kubenswrapper[4821]: I0930 17:31:40.716272 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8cd6dfc-6145-4325-b100-ace6b130ad73" path="/var/lib/kubelet/pods/b8cd6dfc-6145-4325-b100-ace6b130ad73/volumes" Sep 30 17:31:40 crc kubenswrapper[4821]: I0930 17:31:40.761441 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4-utilities\") pod \"certified-operators-8zbld\" (UID: \"c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4\") " pod="openshift-marketplace/certified-operators-8zbld" Sep 30 17:31:40 crc kubenswrapper[4821]: I0930 17:31:40.761500 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjzsh\" (UniqueName: \"kubernetes.io/projected/c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4-kube-api-access-sjzsh\") pod \"certified-operators-8zbld\" (UID: \"c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4\") " pod="openshift-marketplace/certified-operators-8zbld" Sep 30 17:31:40 crc kubenswrapper[4821]: I0930 17:31:40.761558 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4-catalog-content\") pod \"certified-operators-8zbld\" (UID: \"c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4\") " pod="openshift-marketplace/certified-operators-8zbld" Sep 30 17:31:40 crc kubenswrapper[4821]: I0930 17:31:40.762278 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4-catalog-content\") pod \"certified-operators-8zbld\" (UID: \"c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4\") " pod="openshift-marketplace/certified-operators-8zbld" Sep 30 17:31:40 crc kubenswrapper[4821]: I0930 17:31:40.762354 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4-utilities\") pod \"certified-operators-8zbld\" (UID: \"c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4\") " pod="openshift-marketplace/certified-operators-8zbld" Sep 30 17:31:40 crc kubenswrapper[4821]: I0930 17:31:40.783247 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjzsh\" (UniqueName: \"kubernetes.io/projected/c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4-kube-api-access-sjzsh\") pod \"certified-operators-8zbld\" (UID: \"c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4\") " pod="openshift-marketplace/certified-operators-8zbld" Sep 30 17:31:40 crc kubenswrapper[4821]: I0930 17:31:40.864441 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8zbld" Sep 30 17:31:41 crc kubenswrapper[4821]: I0930 17:31:41.303592 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8zbld"] Sep 30 17:31:41 crc kubenswrapper[4821]: I0930 17:31:41.781241 4821 generic.go:334] "Generic (PLEG): container finished" podID="c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4" containerID="75fc255f9cdc2a7d4bbbe011b4a7f5954ae5a7eb9d61a149bfce7df2efc5c2fe" exitCode=0 Sep 30 17:31:41 crc kubenswrapper[4821]: I0930 17:31:41.781289 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8zbld" event={"ID":"c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4","Type":"ContainerDied","Data":"75fc255f9cdc2a7d4bbbe011b4a7f5954ae5a7eb9d61a149bfce7df2efc5c2fe"} Sep 30 17:31:41 crc kubenswrapper[4821]: I0930 17:31:41.781320 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8zbld" event={"ID":"c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4","Type":"ContainerStarted","Data":"024fe2dbd6b42bb2dc42710bc9276093f5c2675f88d134d3e85f74d702f2b5ea"} Sep 30 17:31:42 crc kubenswrapper[4821]: I0930 17:31:42.707542 4821 scope.go:117] "RemoveContainer" containerID="3b2af72a05496030d699d687642569746e55f2a2d5c57c654aa59598f54ecc5d" Sep 30 17:31:42 crc kubenswrapper[4821]: E0930 17:31:42.708103 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:31:42 crc kubenswrapper[4821]: I0930 17:31:42.720570 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qlvth"] Sep 30 17:31:42 crc kubenswrapper[4821]: I0930 17:31:42.722307 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qlvth" Sep 30 17:31:42 crc kubenswrapper[4821]: I0930 17:31:42.736530 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qlvth"] Sep 30 17:31:42 crc kubenswrapper[4821]: I0930 17:31:42.789528 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8zbld" event={"ID":"c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4","Type":"ContainerStarted","Data":"0bad2976882678feff60f0035e9cfa5b6651d2e90850df7f7c7c3733c048febf"} Sep 30 17:31:42 crc kubenswrapper[4821]: I0930 17:31:42.796391 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cccfa37-4baf-47a6-b302-86a07cae2878-utilities\") pod \"redhat-marketplace-qlvth\" (UID: \"0cccfa37-4baf-47a6-b302-86a07cae2878\") " pod="openshift-marketplace/redhat-marketplace-qlvth" Sep 30 17:31:42 crc kubenswrapper[4821]: I0930 17:31:42.796533 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cccfa37-4baf-47a6-b302-86a07cae2878-catalog-content\") pod \"redhat-marketplace-qlvth\" (UID: \"0cccfa37-4baf-47a6-b302-86a07cae2878\") " pod="openshift-marketplace/redhat-marketplace-qlvth" Sep 30 17:31:42 crc kubenswrapper[4821]: I0930 17:31:42.796576 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg6hd\" (UniqueName: \"kubernetes.io/projected/0cccfa37-4baf-47a6-b302-86a07cae2878-kube-api-access-rg6hd\") pod \"redhat-marketplace-qlvth\" (UID: \"0cccfa37-4baf-47a6-b302-86a07cae2878\") " pod="openshift-marketplace/redhat-marketplace-qlvth" Sep 30 17:31:42 crc kubenswrapper[4821]: I0930 17:31:42.897784 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cccfa37-4baf-47a6-b302-86a07cae2878-catalog-content\") pod \"redhat-marketplace-qlvth\" (UID: \"0cccfa37-4baf-47a6-b302-86a07cae2878\") " pod="openshift-marketplace/redhat-marketplace-qlvth" Sep 30 17:31:42 crc kubenswrapper[4821]: I0930 17:31:42.897837 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg6hd\" (UniqueName: \"kubernetes.io/projected/0cccfa37-4baf-47a6-b302-86a07cae2878-kube-api-access-rg6hd\") pod \"redhat-marketplace-qlvth\" (UID: \"0cccfa37-4baf-47a6-b302-86a07cae2878\") " pod="openshift-marketplace/redhat-marketplace-qlvth" Sep 30 17:31:42 crc kubenswrapper[4821]: I0930 17:31:42.897919 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cccfa37-4baf-47a6-b302-86a07cae2878-utilities\") pod \"redhat-marketplace-qlvth\" (UID: \"0cccfa37-4baf-47a6-b302-86a07cae2878\") " pod="openshift-marketplace/redhat-marketplace-qlvth" Sep 30 17:31:42 crc kubenswrapper[4821]: I0930 17:31:42.898397 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cccfa37-4baf-47a6-b302-86a07cae2878-utilities\") pod \"redhat-marketplace-qlvth\" (UID: \"0cccfa37-4baf-47a6-b302-86a07cae2878\") " pod="openshift-marketplace/redhat-marketplace-qlvth" Sep 30 17:31:42 crc kubenswrapper[4821]: I0930 17:31:42.898711 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0cccfa37-4baf-47a6-b302-86a07cae2878-catalog-content\") pod \"redhat-marketplace-qlvth\" (UID: \"0cccfa37-4baf-47a6-b302-86a07cae2878\") " pod="openshift-marketplace/redhat-marketplace-qlvth" Sep 30 17:31:42 crc kubenswrapper[4821]: I0930 17:31:42.916976 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg6hd\" (UniqueName: \"kubernetes.io/projected/0cccfa37-4baf-47a6-b302-86a07cae2878-kube-api-access-rg6hd\") pod \"redhat-marketplace-qlvth\" (UID: \"0cccfa37-4baf-47a6-b302-86a07cae2878\") " pod="openshift-marketplace/redhat-marketplace-qlvth" Sep 30 17:31:43 crc kubenswrapper[4821]: I0930 17:31:43.037704 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qlvth" Sep 30 17:31:43 crc kubenswrapper[4821]: I0930 17:31:43.478360 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qlvth"] Sep 30 17:31:43 crc kubenswrapper[4821]: I0930 17:31:43.799420 4821 generic.go:334] "Generic (PLEG): container finished" podID="0cccfa37-4baf-47a6-b302-86a07cae2878" containerID="7818346a6b16f5f7fd1e20ef72515df5983bc0e6703da788340ab964bcf566e1" exitCode=0 Sep 30 17:31:43 crc kubenswrapper[4821]: I0930 17:31:43.799482 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qlvth" event={"ID":"0cccfa37-4baf-47a6-b302-86a07cae2878","Type":"ContainerDied","Data":"7818346a6b16f5f7fd1e20ef72515df5983bc0e6703da788340ab964bcf566e1"} Sep 30 17:31:43 crc kubenswrapper[4821]: I0930 17:31:43.799766 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qlvth" event={"ID":"0cccfa37-4baf-47a6-b302-86a07cae2878","Type":"ContainerStarted","Data":"ee5c4fd913db50382066e8a19559bdba32fe7c55de78b00e0305e9ea65a72648"} Sep 30 17:31:44 crc kubenswrapper[4821]: I0930 17:31:44.809964 4821 generic.go:334] "Generic (PLEG): container finished" podID="c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4" containerID="0bad2976882678feff60f0035e9cfa5b6651d2e90850df7f7c7c3733c048febf" exitCode=0 Sep 30 17:31:44 crc kubenswrapper[4821]: I0930 17:31:44.810036 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8zbld" event={"ID":"c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4","Type":"ContainerDied","Data":"0bad2976882678feff60f0035e9cfa5b6651d2e90850df7f7c7c3733c048febf"} Sep 30 17:31:44 crc kubenswrapper[4821]: I0930 17:31:44.813428 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qlvth" event={"ID":"0cccfa37-4baf-47a6-b302-86a07cae2878","Type":"ContainerStarted","Data":"ffb93b65511618b4f5a2ee6abdfeb5a0e117d33480ad828b44e56f224d17cb01"} Sep 30 17:31:45 crc kubenswrapper[4821]: I0930 17:31:45.822263 4821 generic.go:334] "Generic (PLEG): container finished" podID="0cccfa37-4baf-47a6-b302-86a07cae2878" containerID="ffb93b65511618b4f5a2ee6abdfeb5a0e117d33480ad828b44e56f224d17cb01" exitCode=0 Sep 30 17:31:45 crc kubenswrapper[4821]: I0930 17:31:45.822306 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qlvth" event={"ID":"0cccfa37-4baf-47a6-b302-86a07cae2878","Type":"ContainerDied","Data":"ffb93b65511618b4f5a2ee6abdfeb5a0e117d33480ad828b44e56f224d17cb01"} Sep 30 17:31:45 crc kubenswrapper[4821]: I0930 17:31:45.825172 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8zbld" 
event={"ID":"c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4","Type":"ContainerStarted","Data":"2ff037c5ddcbdabc5fe64dd91daa0549602b52168c8961dd8a02e370607d5699"} Sep 30 17:31:45 crc kubenswrapper[4821]: I0930 17:31:45.881607 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8zbld" podStartSLOduration=2.22451248 podStartE2EDuration="5.881591108s" podCreationTimestamp="2025-09-30 17:31:40 +0000 UTC" firstStartedPulling="2025-09-30 17:31:41.782412984 +0000 UTC m=+1697.687458948" lastFinishedPulling="2025-09-30 17:31:45.439491592 +0000 UTC m=+1701.344537576" observedRunningTime="2025-09-30 17:31:45.878533833 +0000 UTC m=+1701.783579787" watchObservedRunningTime="2025-09-30 17:31:45.881591108 +0000 UTC m=+1701.786637052" Sep 30 17:31:46 crc kubenswrapper[4821]: I0930 17:31:46.835519 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qlvth" event={"ID":"0cccfa37-4baf-47a6-b302-86a07cae2878","Type":"ContainerStarted","Data":"6fe47120bd90d91f5567eeb0a9282d3db9e9315a24d29ada85725f1cf6a25bf9"} Sep 30 17:31:46 crc kubenswrapper[4821]: I0930 17:31:46.863797 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qlvth" podStartSLOduration=2.349391389 podStartE2EDuration="4.863777269s" podCreationTimestamp="2025-09-30 17:31:42 +0000 UTC" firstStartedPulling="2025-09-30 17:31:43.800947207 +0000 UTC m=+1699.705993151" lastFinishedPulling="2025-09-30 17:31:46.315333067 +0000 UTC m=+1702.220379031" observedRunningTime="2025-09-30 17:31:46.856730664 +0000 UTC m=+1702.761776648" watchObservedRunningTime="2025-09-30 17:31:46.863777269 +0000 UTC m=+1702.768823223" Sep 30 17:31:50 crc kubenswrapper[4821]: I0930 17:31:50.865059 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8zbld" Sep 30 17:31:50 crc kubenswrapper[4821]: I0930 17:31:50.865657 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8zbld" Sep 30 17:31:50 crc kubenswrapper[4821]: I0930 17:31:50.913592 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8zbld" Sep 30 17:31:51 crc kubenswrapper[4821]: I0930 17:31:51.927316 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8zbld" Sep 30 17:31:53 crc kubenswrapper[4821]: I0930 17:31:53.037872 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qlvth" Sep 30 17:31:53 crc kubenswrapper[4821]: I0930 17:31:53.038796 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qlvth" Sep 30 17:31:53 crc kubenswrapper[4821]: I0930 17:31:53.080130 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qlvth" Sep 30 17:31:53 crc kubenswrapper[4821]: I0930 17:31:53.711275 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8zbld"] Sep 30 17:31:53 crc kubenswrapper[4821]: I0930 17:31:53.896927 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8zbld" podUID="c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4" containerName="registry-server" 
containerID="cri-o://2ff037c5ddcbdabc5fe64dd91daa0549602b52168c8961dd8a02e370607d5699" gracePeriod=2 Sep 30 17:31:53 crc kubenswrapper[4821]: I0930 17:31:53.944868 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qlvth" Sep 30 17:31:54 crc kubenswrapper[4821]: I0930 17:31:54.302251 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8zbld" Sep 30 17:31:54 crc kubenswrapper[4821]: I0930 17:31:54.416470 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4-catalog-content\") pod \"c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4\" (UID: \"c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4\") " Sep 30 17:31:54 crc kubenswrapper[4821]: I0930 17:31:54.417275 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjzsh\" (UniqueName: \"kubernetes.io/projected/c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4-kube-api-access-sjzsh\") pod \"c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4\" (UID: \"c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4\") " Sep 30 17:31:54 crc kubenswrapper[4821]: I0930 17:31:54.417334 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4-utilities\") pod \"c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4\" (UID: \"c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4\") " Sep 30 17:31:54 crc kubenswrapper[4821]: I0930 17:31:54.418351 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4-utilities" (OuterVolumeSpecName: "utilities") pod "c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4" (UID: "c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:31:54 crc kubenswrapper[4821]: I0930 17:31:54.421764 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4-kube-api-access-sjzsh" (OuterVolumeSpecName: "kube-api-access-sjzsh") pod "c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4" (UID: "c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4"). InnerVolumeSpecName "kube-api-access-sjzsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:31:54 crc kubenswrapper[4821]: I0930 17:31:54.459560 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4" (UID: "c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:31:54 crc kubenswrapper[4821]: I0930 17:31:54.518883 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:31:54 crc kubenswrapper[4821]: I0930 17:31:54.518918 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjzsh\" (UniqueName: \"kubernetes.io/projected/c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4-kube-api-access-sjzsh\") on node \"crc\" DevicePath \"\"" Sep 30 17:31:54 crc kubenswrapper[4821]: I0930 17:31:54.518933 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:31:54 crc kubenswrapper[4821]: I0930 17:31:54.907635 4821 generic.go:334] "Generic (PLEG): container finished" podID="c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4" containerID="2ff037c5ddcbdabc5fe64dd91daa0549602b52168c8961dd8a02e370607d5699" exitCode=0 Sep 30 17:31:54 crc kubenswrapper[4821]: I0930 17:31:54.907900 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8zbld" event={"ID":"c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4","Type":"ContainerDied","Data":"2ff037c5ddcbdabc5fe64dd91daa0549602b52168c8961dd8a02e370607d5699"} Sep 30 17:31:54 crc kubenswrapper[4821]: I0930 17:31:54.907966 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8zbld" event={"ID":"c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4","Type":"ContainerDied","Data":"024fe2dbd6b42bb2dc42710bc9276093f5c2675f88d134d3e85f74d702f2b5ea"} Sep 30 17:31:54 crc kubenswrapper[4821]: I0930 17:31:54.907991 4821 scope.go:117] "RemoveContainer" containerID="2ff037c5ddcbdabc5fe64dd91daa0549602b52168c8961dd8a02e370607d5699" Sep 30 17:31:54 crc kubenswrapper[4821]: I0930 17:31:54.908232 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8zbld" Sep 30 17:31:54 crc kubenswrapper[4821]: I0930 17:31:54.936058 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8zbld"] Sep 30 17:31:54 crc kubenswrapper[4821]: I0930 17:31:54.938317 4821 scope.go:117] "RemoveContainer" containerID="0bad2976882678feff60f0035e9cfa5b6651d2e90850df7f7c7c3733c048febf" Sep 30 17:31:54 crc kubenswrapper[4821]: I0930 17:31:54.945868 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8zbld"] Sep 30 17:31:54 crc kubenswrapper[4821]: I0930 17:31:54.958193 4821 scope.go:117] "RemoveContainer" containerID="75fc255f9cdc2a7d4bbbe011b4a7f5954ae5a7eb9d61a149bfce7df2efc5c2fe" Sep 30 17:31:55 crc kubenswrapper[4821]: I0930 17:31:55.010298 4821 scope.go:117] "RemoveContainer" containerID="2ff037c5ddcbdabc5fe64dd91daa0549602b52168c8961dd8a02e370607d5699" Sep 30 17:31:55 crc kubenswrapper[4821]: E0930 17:31:55.010864 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ff037c5ddcbdabc5fe64dd91daa0549602b52168c8961dd8a02e370607d5699\": container with ID starting with 2ff037c5ddcbdabc5fe64dd91daa0549602b52168c8961dd8a02e370607d5699 not found: ID does not exist" containerID="2ff037c5ddcbdabc5fe64dd91daa0549602b52168c8961dd8a02e370607d5699" Sep 30 17:31:55 crc kubenswrapper[4821]: I0930 17:31:55.010954 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ff037c5ddcbdabc5fe64dd91daa0549602b52168c8961dd8a02e370607d5699"} err="failed to get container status \"2ff037c5ddcbdabc5fe64dd91daa0549602b52168c8961dd8a02e370607d5699\": rpc error: code = NotFound desc = could not find container \"2ff037c5ddcbdabc5fe64dd91daa0549602b52168c8961dd8a02e370607d5699\": container with ID starting with 2ff037c5ddcbdabc5fe64dd91daa0549602b52168c8961dd8a02e370607d5699 not found: ID does not exist" Sep 30 17:31:55 crc kubenswrapper[4821]: I0930 17:31:55.011038 4821 scope.go:117] "RemoveContainer" containerID="0bad2976882678feff60f0035e9cfa5b6651d2e90850df7f7c7c3733c048febf" Sep 30 17:31:55 crc kubenswrapper[4821]: E0930 17:31:55.011439 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bad2976882678feff60f0035e9cfa5b6651d2e90850df7f7c7c3733c048febf\": container with ID starting with 0bad2976882678feff60f0035e9cfa5b6651d2e90850df7f7c7c3733c048febf not found: ID does not exist" containerID="0bad2976882678feff60f0035e9cfa5b6651d2e90850df7f7c7c3733c048febf" Sep 30 17:31:55 crc kubenswrapper[4821]: I0930 17:31:55.011490 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bad2976882678feff60f0035e9cfa5b6651d2e90850df7f7c7c3733c048febf"} err="failed to get container status \"0bad2976882678feff60f0035e9cfa5b6651d2e90850df7f7c7c3733c048febf\": rpc error: code = NotFound desc = could not find container \"0bad2976882678feff60f0035e9cfa5b6651d2e90850df7f7c7c3733c048febf\": container with ID starting with 0bad2976882678feff60f0035e9cfa5b6651d2e90850df7f7c7c3733c048febf not found: ID does not exist" Sep 30 17:31:55 crc kubenswrapper[4821]: I0930 17:31:55.011524 4821 scope.go:117] "RemoveContainer" containerID="75fc255f9cdc2a7d4bbbe011b4a7f5954ae5a7eb9d61a149bfce7df2efc5c2fe" Sep 30 17:31:55 crc kubenswrapper[4821]: E0930 17:31:55.012465 4821 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"75fc255f9cdc2a7d4bbbe011b4a7f5954ae5a7eb9d61a149bfce7df2efc5c2fe\": container with ID starting with 75fc255f9cdc2a7d4bbbe011b4a7f5954ae5a7eb9d61a149bfce7df2efc5c2fe not found: ID does not exist" containerID="75fc255f9cdc2a7d4bbbe011b4a7f5954ae5a7eb9d61a149bfce7df2efc5c2fe" Sep 30 17:31:55 crc kubenswrapper[4821]: I0930 17:31:55.012618 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75fc255f9cdc2a7d4bbbe011b4a7f5954ae5a7eb9d61a149bfce7df2efc5c2fe"} err="failed to get container status \"75fc255f9cdc2a7d4bbbe011b4a7f5954ae5a7eb9d61a149bfce7df2efc5c2fe\": rpc error: code = NotFound desc = could not find container \"75fc255f9cdc2a7d4bbbe011b4a7f5954ae5a7eb9d61a149bfce7df2efc5c2fe\": container with ID starting with 75fc255f9cdc2a7d4bbbe011b4a7f5954ae5a7eb9d61a149bfce7df2efc5c2fe not found: ID does not exist" Sep 30 17:31:55 crc kubenswrapper[4821]: I0930 17:31:55.508719 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qlvth"] Sep 30 17:31:56 crc kubenswrapper[4821]: I0930 17:31:56.707265 4821 scope.go:117] "RemoveContainer" containerID="3b2af72a05496030d699d687642569746e55f2a2d5c57c654aa59598f54ecc5d" Sep 30 17:31:56 crc kubenswrapper[4821]: E0930 17:31:56.708161 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:31:56 crc kubenswrapper[4821]: I0930 17:31:56.724189 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4" path="/var/lib/kubelet/pods/c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4/volumes" Sep 30 17:31:56 crc kubenswrapper[4821]: I0930 17:31:56.930763 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qlvth" podUID="0cccfa37-4baf-47a6-b302-86a07cae2878" containerName="registry-server" containerID="cri-o://6fe47120bd90d91f5567eeb0a9282d3db9e9315a24d29ada85725f1cf6a25bf9" gracePeriod=2 Sep 30 17:31:57 crc kubenswrapper[4821]: I0930 17:31:57.389213 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qlvth" Sep 30 17:31:57 crc kubenswrapper[4821]: I0930 17:31:57.471634 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cccfa37-4baf-47a6-b302-86a07cae2878-utilities\") pod \"0cccfa37-4baf-47a6-b302-86a07cae2878\" (UID: \"0cccfa37-4baf-47a6-b302-86a07cae2878\") " Sep 30 17:31:57 crc kubenswrapper[4821]: I0930 17:31:57.471830 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg6hd\" (UniqueName: \"kubernetes.io/projected/0cccfa37-4baf-47a6-b302-86a07cae2878-kube-api-access-rg6hd\") pod \"0cccfa37-4baf-47a6-b302-86a07cae2878\" (UID: \"0cccfa37-4baf-47a6-b302-86a07cae2878\") " Sep 30 17:31:57 crc kubenswrapper[4821]: I0930 17:31:57.471876 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cccfa37-4baf-47a6-b302-86a07cae2878-catalog-content\") pod \"0cccfa37-4baf-47a6-b302-86a07cae2878\" (UID: \"0cccfa37-4baf-47a6-b302-86a07cae2878\") " Sep 30 17:31:57 crc kubenswrapper[4821]: I0930 17:31:57.473071 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cccfa37-4baf-47a6-b302-86a07cae2878-utilities" (OuterVolumeSpecName: "utilities") pod "0cccfa37-4baf-47a6-b302-86a07cae2878" (UID: "0cccfa37-4baf-47a6-b302-86a07cae2878"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:31:57 crc kubenswrapper[4821]: I0930 17:31:57.478468 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cccfa37-4baf-47a6-b302-86a07cae2878-kube-api-access-rg6hd" (OuterVolumeSpecName: "kube-api-access-rg6hd") pod "0cccfa37-4baf-47a6-b302-86a07cae2878" (UID: "0cccfa37-4baf-47a6-b302-86a07cae2878"). InnerVolumeSpecName "kube-api-access-rg6hd". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:31:57 crc kubenswrapper[4821]: I0930 17:31:57.484375 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cccfa37-4baf-47a6-b302-86a07cae2878-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0cccfa37-4baf-47a6-b302-86a07cae2878" (UID: "0cccfa37-4baf-47a6-b302-86a07cae2878"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:31:57 crc kubenswrapper[4821]: I0930 17:31:57.573276 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rg6hd\" (UniqueName: \"kubernetes.io/projected/0cccfa37-4baf-47a6-b302-86a07cae2878-kube-api-access-rg6hd\") on node \"crc\" DevicePath \"\"" Sep 30 17:31:57 crc kubenswrapper[4821]: I0930 17:31:57.573305 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cccfa37-4baf-47a6-b302-86a07cae2878-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:31:57 crc kubenswrapper[4821]: I0930 17:31:57.573314 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cccfa37-4baf-47a6-b302-86a07cae2878-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:31:57 crc kubenswrapper[4821]: I0930 17:31:57.940835 4821 generic.go:334] "Generic (PLEG): container finished" podID="0cccfa37-4baf-47a6-b302-86a07cae2878" containerID="6fe47120bd90d91f5567eeb0a9282d3db9e9315a24d29ada85725f1cf6a25bf9" exitCode=0 Sep 30 17:31:57 crc kubenswrapper[4821]: I0930 17:31:57.940910 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qlvth" Sep 30 17:31:57 crc kubenswrapper[4821]: I0930 17:31:57.940895 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qlvth" event={"ID":"0cccfa37-4baf-47a6-b302-86a07cae2878","Type":"ContainerDied","Data":"6fe47120bd90d91f5567eeb0a9282d3db9e9315a24d29ada85725f1cf6a25bf9"} Sep 30 17:31:57 crc kubenswrapper[4821]: I0930 17:31:57.940990 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qlvth" event={"ID":"0cccfa37-4baf-47a6-b302-86a07cae2878","Type":"ContainerDied","Data":"ee5c4fd913db50382066e8a19559bdba32fe7c55de78b00e0305e9ea65a72648"} Sep 30 17:31:57 crc kubenswrapper[4821]: I0930 17:31:57.941007 4821 scope.go:117] "RemoveContainer" containerID="6fe47120bd90d91f5567eeb0a9282d3db9e9315a24d29ada85725f1cf6a25bf9" Sep 30 17:31:57 crc kubenswrapper[4821]: I0930 17:31:57.959697 4821 scope.go:117] "RemoveContainer" containerID="ffb93b65511618b4f5a2ee6abdfeb5a0e117d33480ad828b44e56f224d17cb01" Sep 30 17:31:57 crc kubenswrapper[4821]: I0930 17:31:57.989538 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qlvth"] Sep 30 17:31:57 crc kubenswrapper[4821]: I0930 17:31:57.995616 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qlvth"] Sep 30 17:31:58 crc kubenswrapper[4821]: I0930 17:31:58.006096 4821 scope.go:117] "RemoveContainer" containerID="7818346a6b16f5f7fd1e20ef72515df5983bc0e6703da788340ab964bcf566e1" Sep 30 17:31:58 crc kubenswrapper[4821]: I0930 17:31:58.027785 4821 scope.go:117] "RemoveContainer" containerID="6fe47120bd90d91f5567eeb0a9282d3db9e9315a24d29ada85725f1cf6a25bf9" Sep 30 17:31:58 crc kubenswrapper[4821]: E0930 17:31:58.028292 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fe47120bd90d91f5567eeb0a9282d3db9e9315a24d29ada85725f1cf6a25bf9\": container with ID starting with 6fe47120bd90d91f5567eeb0a9282d3db9e9315a24d29ada85725f1cf6a25bf9 not found: ID does not exist" containerID="6fe47120bd90d91f5567eeb0a9282d3db9e9315a24d29ada85725f1cf6a25bf9" Sep 30 17:31:58 crc kubenswrapper[4821]: I0930 17:31:58.028326 4821 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fe47120bd90d91f5567eeb0a9282d3db9e9315a24d29ada85725f1cf6a25bf9"} err="failed to get container status \"6fe47120bd90d91f5567eeb0a9282d3db9e9315a24d29ada85725f1cf6a25bf9\": rpc error: code = NotFound desc = could not find container \"6fe47120bd90d91f5567eeb0a9282d3db9e9315a24d29ada85725f1cf6a25bf9\": container with ID starting with 6fe47120bd90d91f5567eeb0a9282d3db9e9315a24d29ada85725f1cf6a25bf9 not found: ID does not exist" Sep 30 17:31:58 crc kubenswrapper[4821]: I0930 17:31:58.028351 4821 scope.go:117] "RemoveContainer" containerID="ffb93b65511618b4f5a2ee6abdfeb5a0e117d33480ad828b44e56f224d17cb01" Sep 30 17:31:58 crc kubenswrapper[4821]: E0930 17:31:58.028839 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffb93b65511618b4f5a2ee6abdfeb5a0e117d33480ad828b44e56f224d17cb01\": container with ID starting with ffb93b65511618b4f5a2ee6abdfeb5a0e117d33480ad828b44e56f224d17cb01 not found: ID does not exist" containerID="ffb93b65511618b4f5a2ee6abdfeb5a0e117d33480ad828b44e56f224d17cb01" Sep 30 17:31:58 crc kubenswrapper[4821]: I0930 17:31:58.028864 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffb93b65511618b4f5a2ee6abdfeb5a0e117d33480ad828b44e56f224d17cb01"} err="failed to get container status \"ffb93b65511618b4f5a2ee6abdfeb5a0e117d33480ad828b44e56f224d17cb01\": rpc error: code = NotFound desc = could not find container \"ffb93b65511618b4f5a2ee6abdfeb5a0e117d33480ad828b44e56f224d17cb01\": container with ID starting with ffb93b65511618b4f5a2ee6abdfeb5a0e117d33480ad828b44e56f224d17cb01 not found: ID does not exist" Sep 30 17:31:58 crc kubenswrapper[4821]: I0930 17:31:58.028881 4821 scope.go:117] "RemoveContainer" containerID="7818346a6b16f5f7fd1e20ef72515df5983bc0e6703da788340ab964bcf566e1" Sep 30 17:31:58 crc kubenswrapper[4821]: E0930 17:31:58.029500 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7818346a6b16f5f7fd1e20ef72515df5983bc0e6703da788340ab964bcf566e1\": container with ID starting with 7818346a6b16f5f7fd1e20ef72515df5983bc0e6703da788340ab964bcf566e1 not found: ID does not exist" containerID="7818346a6b16f5f7fd1e20ef72515df5983bc0e6703da788340ab964bcf566e1" Sep 30 17:31:58 crc kubenswrapper[4821]: I0930 17:31:58.029521 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7818346a6b16f5f7fd1e20ef72515df5983bc0e6703da788340ab964bcf566e1"} err="failed to get container status \"7818346a6b16f5f7fd1e20ef72515df5983bc0e6703da788340ab964bcf566e1\": rpc error: code = NotFound desc = could not find container \"7818346a6b16f5f7fd1e20ef72515df5983bc0e6703da788340ab964bcf566e1\": container with ID starting with 7818346a6b16f5f7fd1e20ef72515df5983bc0e6703da788340ab964bcf566e1 not found: ID does not exist" Sep 30 17:31:58 crc kubenswrapper[4821]: I0930 17:31:58.718330 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cccfa37-4baf-47a6-b302-86a07cae2878" path="/var/lib/kubelet/pods/0cccfa37-4baf-47a6-b302-86a07cae2878/volumes" Sep 30 17:32:02 crc kubenswrapper[4821]: I0930 17:32:02.065700 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-676jk"] Sep 30 17:32:02 crc kubenswrapper[4821]: I0930 17:32:02.076401 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-conductor-db-sync-zx2qv"] Sep 30 17:32:02 crc kubenswrapper[4821]: I0930 17:32:02.084922 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zx2qv"] Sep 30 17:32:02 crc kubenswrapper[4821]: I0930 17:32:02.092318 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-676jk"] Sep 30 17:32:02 crc kubenswrapper[4821]: I0930 17:32:02.719516 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="824f4509-b658-47ca-90b7-3725a3839996" path="/var/lib/kubelet/pods/824f4509-b658-47ca-90b7-3725a3839996/volumes" Sep 30 17:32:02 crc kubenswrapper[4821]: I0930 17:32:02.721294 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc0a8889-b876-4532-9c20-c5a0ecae9dd4" path="/var/lib/kubelet/pods/fc0a8889-b876-4532-9c20-c5a0ecae9dd4/volumes" Sep 30 17:32:11 crc kubenswrapper[4821]: I0930 17:32:11.706853 4821 scope.go:117] "RemoveContainer" containerID="3b2af72a05496030d699d687642569746e55f2a2d5c57c654aa59598f54ecc5d" Sep 30 17:32:11 crc kubenswrapper[4821]: E0930 17:32:11.707641 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:32:22 crc kubenswrapper[4821]: I0930 17:32:22.708124 4821 scope.go:117] "RemoveContainer" containerID="3b2af72a05496030d699d687642569746e55f2a2d5c57c654aa59598f54ecc5d" Sep 30 17:32:22 crc kubenswrapper[4821]: E0930 17:32:22.709204 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:32:29 crc kubenswrapper[4821]: I0930 17:32:29.384695 4821 scope.go:117] "RemoveContainer" containerID="9b7360409521062fbccdc80fac63fafb12394b9e4548a3cff92d40c80ca573ac" Sep 30 17:32:29 crc kubenswrapper[4821]: I0930 17:32:29.417406 4821 scope.go:117] "RemoveContainer" containerID="236471828c46e013602f99e03dbcaa478a1b95fd5fc02c719efb4b3e22d0e06f" Sep 30 17:32:29 crc kubenswrapper[4821]: I0930 17:32:29.476204 4821 scope.go:117] "RemoveContainer" containerID="242ec62677dbf6170eec9e02269478a326440a22aa2a359db5e6bb5006472051" Sep 30 17:32:35 crc kubenswrapper[4821]: I0930 17:32:35.707546 4821 scope.go:117] "RemoveContainer" containerID="3b2af72a05496030d699d687642569746e55f2a2d5c57c654aa59598f54ecc5d" Sep 30 17:32:35 crc kubenswrapper[4821]: E0930 17:32:35.708291 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:32:48 crc kubenswrapper[4821]: I0930 17:32:48.051456 4821 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-cell1-cell-mapping-c4v7j"] Sep 30 17:32:48 crc kubenswrapper[4821]: I0930 17:32:48.067722 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-c4v7j"] Sep 30 17:32:48 crc kubenswrapper[4821]: I0930 17:32:48.720227 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c003c616-ec69-450f-b2bd-0a9fb2d84cfa" path="/var/lib/kubelet/pods/c003c616-ec69-450f-b2bd-0a9fb2d84cfa/volumes" Sep 30 17:32:50 crc kubenswrapper[4821]: I0930 17:32:50.707449 4821 scope.go:117] "RemoveContainer" containerID="3b2af72a05496030d699d687642569746e55f2a2d5c57c654aa59598f54ecc5d" Sep 30 17:32:50 crc kubenswrapper[4821]: E0930 17:32:50.707997 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:33:01 crc kubenswrapper[4821]: I0930 17:33:01.707026 4821 scope.go:117] "RemoveContainer" containerID="3b2af72a05496030d699d687642569746e55f2a2d5c57c654aa59598f54ecc5d" Sep 30 17:33:01 crc kubenswrapper[4821]: E0930 17:33:01.707947 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:33:12 crc kubenswrapper[4821]: I0930 17:33:12.706518 4821 scope.go:117] "RemoveContainer" containerID="3b2af72a05496030d699d687642569746e55f2a2d5c57c654aa59598f54ecc5d" Sep 30 17:33:12 crc kubenswrapper[4821]: E0930 17:33:12.707352 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:33:24 crc kubenswrapper[4821]: I0930 17:33:24.714246 4821 scope.go:117] "RemoveContainer" containerID="3b2af72a05496030d699d687642569746e55f2a2d5c57c654aa59598f54ecc5d" Sep 30 17:33:24 crc kubenswrapper[4821]: E0930 17:33:24.715259 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:33:29 crc kubenswrapper[4821]: I0930 17:33:29.636790 4821 scope.go:117] "RemoveContainer" containerID="3ddc37035b4eb39ee3825f171c45ee01ab6ca5795e70581995d89c601eb554b3" Sep 30 17:33:39 crc kubenswrapper[4821]: I0930 17:33:39.707314 4821 scope.go:117] "RemoveContainer" containerID="3b2af72a05496030d699d687642569746e55f2a2d5c57c654aa59598f54ecc5d" Sep 30 17:33:39 crc 
kubenswrapper[4821]: E0930 17:33:39.709204 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:33:52 crc kubenswrapper[4821]: I0930 17:33:52.706946 4821 scope.go:117] "RemoveContainer" containerID="3b2af72a05496030d699d687642569746e55f2a2d5c57c654aa59598f54ecc5d" Sep 30 17:33:53 crc kubenswrapper[4821]: I0930 17:33:53.944206 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" event={"ID":"1c2ce348-eadc-4629-a03f-fb8924b5b434","Type":"ContainerStarted","Data":"1f5607e4aadabcca82a33d9e2689d647522a7cea0172a04f8e541b17ae0c348f"} Sep 30 17:36:19 crc kubenswrapper[4821]: I0930 17:36:19.349807 4821 patch_prober.go:28] interesting pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:36:19 crc kubenswrapper[4821]: I0930 17:36:19.351237 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:36:49 crc kubenswrapper[4821]: I0930 17:36:49.349945 4821 patch_prober.go:28] interesting pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:36:49 crc kubenswrapper[4821]: I0930 17:36:49.350527 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:37:19 crc kubenswrapper[4821]: I0930 17:37:19.350386 4821 patch_prober.go:28] interesting pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:37:19 crc kubenswrapper[4821]: I0930 17:37:19.350897 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:37:19 crc kubenswrapper[4821]: I0930 17:37:19.350947 4821 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" Sep 30 17:37:19 crc kubenswrapper[4821]: I0930 17:37:19.351742 4821 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1f5607e4aadabcca82a33d9e2689d647522a7cea0172a04f8e541b17ae0c348f"} pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 17:37:19 crc kubenswrapper[4821]: I0930 17:37:19.351802 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" containerID="cri-o://1f5607e4aadabcca82a33d9e2689d647522a7cea0172a04f8e541b17ae0c348f" gracePeriod=600 Sep 30 17:37:19 crc kubenswrapper[4821]: I0930 17:37:19.658553 4821 generic.go:334] "Generic (PLEG): container finished" podID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerID="1f5607e4aadabcca82a33d9e2689d647522a7cea0172a04f8e541b17ae0c348f" exitCode=0 Sep 30 17:37:19 crc kubenswrapper[4821]: I0930 17:37:19.658959 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" event={"ID":"1c2ce348-eadc-4629-a03f-fb8924b5b434","Type":"ContainerDied","Data":"1f5607e4aadabcca82a33d9e2689d647522a7cea0172a04f8e541b17ae0c348f"} Sep 30 17:37:19 crc kubenswrapper[4821]: I0930 17:37:19.659006 4821 scope.go:117] "RemoveContainer" containerID="3b2af72a05496030d699d687642569746e55f2a2d5c57c654aa59598f54ecc5d" Sep 30 17:37:20 crc kubenswrapper[4821]: I0930 17:37:20.670576 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" event={"ID":"1c2ce348-eadc-4629-a03f-fb8924b5b434","Type":"ContainerStarted","Data":"0626195f9dae72276db5956ba650de3351e14934ae2efe7e2e585943db5fdbd3"} Sep 30 17:37:55 crc kubenswrapper[4821]: I0930 17:37:55.004902 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-98zr2"] Sep 30 17:37:55 crc kubenswrapper[4821]: E0930 17:37:55.005979 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cccfa37-4baf-47a6-b302-86a07cae2878" containerName="extract-utilities" Sep 30 17:37:55 crc kubenswrapper[4821]: I0930 17:37:55.005997 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cccfa37-4baf-47a6-b302-86a07cae2878" containerName="extract-utilities" Sep 30 17:37:55 crc kubenswrapper[4821]: E0930 17:37:55.006027 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4" containerName="registry-server" Sep 30 17:37:55 crc kubenswrapper[4821]: I0930 17:37:55.006036 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4" containerName="registry-server" Sep 30 17:37:55 crc kubenswrapper[4821]: E0930 17:37:55.006068 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4" containerName="extract-utilities" Sep 30 17:37:55 crc kubenswrapper[4821]: I0930 17:37:55.006077 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4" containerName="extract-utilities" Sep 30 17:37:55 crc kubenswrapper[4821]: E0930 17:37:55.006116 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cccfa37-4baf-47a6-b302-86a07cae2878" containerName="registry-server" Sep 30 17:37:55 crc kubenswrapper[4821]: I0930 17:37:55.006125 4821 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0cccfa37-4baf-47a6-b302-86a07cae2878" containerName="registry-server" Sep 30 17:37:55 crc kubenswrapper[4821]: E0930 17:37:55.006144 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4" containerName="extract-content" Sep 30 17:37:55 crc kubenswrapper[4821]: I0930 17:37:55.006151 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4" containerName="extract-content" Sep 30 17:37:55 crc kubenswrapper[4821]: E0930 17:37:55.006164 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cccfa37-4baf-47a6-b302-86a07cae2878" containerName="extract-content" Sep 30 17:37:55 crc kubenswrapper[4821]: I0930 17:37:55.006172 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cccfa37-4baf-47a6-b302-86a07cae2878" containerName="extract-content" Sep 30 17:37:55 crc kubenswrapper[4821]: I0930 17:37:55.006370 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0aa3f7c-14ef-4a65-ad7c-c2b8d2307ef4" containerName="registry-server" Sep 30 17:37:55 crc kubenswrapper[4821]: I0930 17:37:55.006385 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cccfa37-4baf-47a6-b302-86a07cae2878" containerName="registry-server" Sep 30 17:37:55 crc kubenswrapper[4821]: I0930 17:37:55.007969 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-98zr2" Sep 30 17:37:55 crc kubenswrapper[4821]: I0930 17:37:55.020523 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-98zr2"] Sep 30 17:37:55 crc kubenswrapper[4821]: I0930 17:37:55.162730 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13131b24-ebca-412f-9926-9fae23180ae7-utilities\") pod \"community-operators-98zr2\" (UID: \"13131b24-ebca-412f-9926-9fae23180ae7\") " pod="openshift-marketplace/community-operators-98zr2" Sep 30 17:37:55 crc kubenswrapper[4821]: I0930 17:37:55.163071 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8796t\" (UniqueName: \"kubernetes.io/projected/13131b24-ebca-412f-9926-9fae23180ae7-kube-api-access-8796t\") pod \"community-operators-98zr2\" (UID: \"13131b24-ebca-412f-9926-9fae23180ae7\") " pod="openshift-marketplace/community-operators-98zr2" Sep 30 17:37:55 crc kubenswrapper[4821]: I0930 17:37:55.163125 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13131b24-ebca-412f-9926-9fae23180ae7-catalog-content\") pod \"community-operators-98zr2\" (UID: \"13131b24-ebca-412f-9926-9fae23180ae7\") " pod="openshift-marketplace/community-operators-98zr2" Sep 30 17:37:55 crc kubenswrapper[4821]: I0930 17:37:55.264189 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8796t\" (UniqueName: \"kubernetes.io/projected/13131b24-ebca-412f-9926-9fae23180ae7-kube-api-access-8796t\") pod \"community-operators-98zr2\" (UID: \"13131b24-ebca-412f-9926-9fae23180ae7\") " pod="openshift-marketplace/community-operators-98zr2" Sep 30 17:37:55 crc kubenswrapper[4821]: I0930 17:37:55.264619 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/13131b24-ebca-412f-9926-9fae23180ae7-catalog-content\") pod \"community-operators-98zr2\" (UID: \"13131b24-ebca-412f-9926-9fae23180ae7\") " pod="openshift-marketplace/community-operators-98zr2" Sep 30 17:37:55 crc kubenswrapper[4821]: I0930 17:37:55.264909 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13131b24-ebca-412f-9926-9fae23180ae7-utilities\") pod \"community-operators-98zr2\" (UID: \"13131b24-ebca-412f-9926-9fae23180ae7\") " pod="openshift-marketplace/community-operators-98zr2" Sep 30 17:37:55 crc kubenswrapper[4821]: I0930 17:37:55.265141 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13131b24-ebca-412f-9926-9fae23180ae7-catalog-content\") pod \"community-operators-98zr2\" (UID: \"13131b24-ebca-412f-9926-9fae23180ae7\") " pod="openshift-marketplace/community-operators-98zr2" Sep 30 17:37:55 crc kubenswrapper[4821]: I0930 17:37:55.265356 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13131b24-ebca-412f-9926-9fae23180ae7-utilities\") pod \"community-operators-98zr2\" (UID: \"13131b24-ebca-412f-9926-9fae23180ae7\") " pod="openshift-marketplace/community-operators-98zr2" Sep 30 17:37:55 crc kubenswrapper[4821]: I0930 17:37:55.286413 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8796t\" (UniqueName: \"kubernetes.io/projected/13131b24-ebca-412f-9926-9fae23180ae7-kube-api-access-8796t\") pod \"community-operators-98zr2\" (UID: \"13131b24-ebca-412f-9926-9fae23180ae7\") " pod="openshift-marketplace/community-operators-98zr2" Sep 30 17:37:55 crc kubenswrapper[4821]: I0930 17:37:55.332457 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-98zr2" Sep 30 17:37:55 crc kubenswrapper[4821]: I0930 17:37:55.916284 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-98zr2"] Sep 30 17:37:55 crc kubenswrapper[4821]: I0930 17:37:55.983353 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98zr2" event={"ID":"13131b24-ebca-412f-9926-9fae23180ae7","Type":"ContainerStarted","Data":"6cc60da7d94cc2fbddd649c2e51aa2880bdbedcbc725f451d03c6b3485a64846"} Sep 30 17:37:56 crc kubenswrapper[4821]: I0930 17:37:56.993765 4821 generic.go:334] "Generic (PLEG): container finished" podID="13131b24-ebca-412f-9926-9fae23180ae7" containerID="78bd428233adda1d4f0c5210a629fb4ef105ed0105788426d0b3808043aa3084" exitCode=0 Sep 30 17:37:56 crc kubenswrapper[4821]: I0930 17:37:56.993987 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98zr2" event={"ID":"13131b24-ebca-412f-9926-9fae23180ae7","Type":"ContainerDied","Data":"78bd428233adda1d4f0c5210a629fb4ef105ed0105788426d0b3808043aa3084"} Sep 30 17:37:56 crc kubenswrapper[4821]: I0930 17:37:56.997599 4821 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 30 17:37:58 crc kubenswrapper[4821]: I0930 17:37:58.002345 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98zr2" event={"ID":"13131b24-ebca-412f-9926-9fae23180ae7","Type":"ContainerStarted","Data":"152d3a8c86cc754224ecc5350a9732292fe79b2d20db8876af75ea468de4409f"} Sep 30 17:38:00 crc kubenswrapper[4821]: I0930 17:38:00.021274 4821 generic.go:334] "Generic (PLEG): container finished" podID="13131b24-ebca-412f-9926-9fae23180ae7" containerID="152d3a8c86cc754224ecc5350a9732292fe79b2d20db8876af75ea468de4409f" exitCode=0 Sep 30 17:38:00 crc kubenswrapper[4821]: I0930 17:38:00.021331 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98zr2" event={"ID":"13131b24-ebca-412f-9926-9fae23180ae7","Type":"ContainerDied","Data":"152d3a8c86cc754224ecc5350a9732292fe79b2d20db8876af75ea468de4409f"} Sep 30 17:38:01 crc kubenswrapper[4821]: I0930 17:38:01.033300 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98zr2" event={"ID":"13131b24-ebca-412f-9926-9fae23180ae7","Type":"ContainerStarted","Data":"6be698b2978ad73e02e41e119b1adc427a5af4c6f8f10b0923117ca3d52e2c14"} Sep 30 17:38:01 crc kubenswrapper[4821]: I0930 17:38:01.066049 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-98zr2" podStartSLOduration=3.510168611 podStartE2EDuration="7.066029071s" podCreationTimestamp="2025-09-30 17:37:54 +0000 UTC" firstStartedPulling="2025-09-30 17:37:56.997186919 +0000 UTC m=+2072.902232893" lastFinishedPulling="2025-09-30 17:38:00.553047399 +0000 UTC m=+2076.458093353" observedRunningTime="2025-09-30 17:38:01.062649398 +0000 UTC m=+2076.967695342" watchObservedRunningTime="2025-09-30 17:38:01.066029071 +0000 UTC m=+2076.971075025" Sep 30 17:38:05 crc kubenswrapper[4821]: I0930 17:38:05.333313 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-98zr2" Sep 30 17:38:05 crc kubenswrapper[4821]: I0930 17:38:05.333989 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-98zr2" Sep 30 17:38:05 crc kubenswrapper[4821]: I0930 17:38:05.381429 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-98zr2" Sep 30 17:38:06 crc kubenswrapper[4821]: I0930 17:38:06.112890 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-98zr2" Sep 30 17:38:06 crc kubenswrapper[4821]: I0930 17:38:06.163744 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-98zr2"] Sep 30 17:38:08 crc kubenswrapper[4821]: I0930 17:38:08.093204 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-98zr2" podUID="13131b24-ebca-412f-9926-9fae23180ae7" containerName="registry-server" containerID="cri-o://6be698b2978ad73e02e41e119b1adc427a5af4c6f8f10b0923117ca3d52e2c14" gracePeriod=2 Sep 30 17:38:08 crc kubenswrapper[4821]: I0930 17:38:08.577209 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-98zr2" Sep 30 17:38:08 crc kubenswrapper[4821]: I0930 17:38:08.641993 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13131b24-ebca-412f-9926-9fae23180ae7-utilities\") pod \"13131b24-ebca-412f-9926-9fae23180ae7\" (UID: \"13131b24-ebca-412f-9926-9fae23180ae7\") " Sep 30 17:38:08 crc kubenswrapper[4821]: I0930 17:38:08.642058 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13131b24-ebca-412f-9926-9fae23180ae7-catalog-content\") pod \"13131b24-ebca-412f-9926-9fae23180ae7\" (UID: \"13131b24-ebca-412f-9926-9fae23180ae7\") " Sep 30 17:38:08 crc kubenswrapper[4821]: I0930 17:38:08.642241 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8796t\" (UniqueName: \"kubernetes.io/projected/13131b24-ebca-412f-9926-9fae23180ae7-kube-api-access-8796t\") pod \"13131b24-ebca-412f-9926-9fae23180ae7\" (UID: \"13131b24-ebca-412f-9926-9fae23180ae7\") " Sep 30 17:38:08 crc kubenswrapper[4821]: I0930 17:38:08.643505 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13131b24-ebca-412f-9926-9fae23180ae7-utilities" (OuterVolumeSpecName: "utilities") pod "13131b24-ebca-412f-9926-9fae23180ae7" (UID: "13131b24-ebca-412f-9926-9fae23180ae7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:38:08 crc kubenswrapper[4821]: I0930 17:38:08.648532 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13131b24-ebca-412f-9926-9fae23180ae7-kube-api-access-8796t" (OuterVolumeSpecName: "kube-api-access-8796t") pod "13131b24-ebca-412f-9926-9fae23180ae7" (UID: "13131b24-ebca-412f-9926-9fae23180ae7"). InnerVolumeSpecName "kube-api-access-8796t". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:38:08 crc kubenswrapper[4821]: I0930 17:38:08.688842 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13131b24-ebca-412f-9926-9fae23180ae7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "13131b24-ebca-412f-9926-9fae23180ae7" (UID: "13131b24-ebca-412f-9926-9fae23180ae7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:38:08 crc kubenswrapper[4821]: I0930 17:38:08.743831 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8796t\" (UniqueName: \"kubernetes.io/projected/13131b24-ebca-412f-9926-9fae23180ae7-kube-api-access-8796t\") on node \"crc\" DevicePath \"\"" Sep 30 17:38:08 crc kubenswrapper[4821]: I0930 17:38:08.743860 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13131b24-ebca-412f-9926-9fae23180ae7-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:38:08 crc kubenswrapper[4821]: I0930 17:38:08.743869 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13131b24-ebca-412f-9926-9fae23180ae7-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:38:09 crc kubenswrapper[4821]: I0930 17:38:09.107010 4821 generic.go:334] "Generic (PLEG): container finished" podID="13131b24-ebca-412f-9926-9fae23180ae7" containerID="6be698b2978ad73e02e41e119b1adc427a5af4c6f8f10b0923117ca3d52e2c14" exitCode=0 Sep 30 17:38:09 crc kubenswrapper[4821]: I0930 17:38:09.107097 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-98zr2" Sep 30 17:38:09 crc kubenswrapper[4821]: I0930 17:38:09.107144 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98zr2" event={"ID":"13131b24-ebca-412f-9926-9fae23180ae7","Type":"ContainerDied","Data":"6be698b2978ad73e02e41e119b1adc427a5af4c6f8f10b0923117ca3d52e2c14"} Sep 30 17:38:09 crc kubenswrapper[4821]: I0930 17:38:09.107636 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98zr2" event={"ID":"13131b24-ebca-412f-9926-9fae23180ae7","Type":"ContainerDied","Data":"6cc60da7d94cc2fbddd649c2e51aa2880bdbedcbc725f451d03c6b3485a64846"} Sep 30 17:38:09 crc kubenswrapper[4821]: I0930 17:38:09.107676 4821 scope.go:117] "RemoveContainer" containerID="6be698b2978ad73e02e41e119b1adc427a5af4c6f8f10b0923117ca3d52e2c14" Sep 30 17:38:09 crc kubenswrapper[4821]: I0930 17:38:09.139685 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-98zr2"] Sep 30 17:38:09 crc kubenswrapper[4821]: I0930 17:38:09.149596 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-98zr2"] Sep 30 17:38:09 crc kubenswrapper[4821]: I0930 17:38:09.154794 4821 scope.go:117] "RemoveContainer" containerID="152d3a8c86cc754224ecc5350a9732292fe79b2d20db8876af75ea468de4409f" Sep 30 17:38:09 crc kubenswrapper[4821]: I0930 17:38:09.188139 4821 scope.go:117] "RemoveContainer" containerID="78bd428233adda1d4f0c5210a629fb4ef105ed0105788426d0b3808043aa3084" Sep 30 17:38:09 crc kubenswrapper[4821]: I0930 17:38:09.216307 4821 scope.go:117] "RemoveContainer" containerID="6be698b2978ad73e02e41e119b1adc427a5af4c6f8f10b0923117ca3d52e2c14" Sep 30 17:38:09 crc kubenswrapper[4821]: E0930 17:38:09.216815 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6be698b2978ad73e02e41e119b1adc427a5af4c6f8f10b0923117ca3d52e2c14\": container with ID starting with 6be698b2978ad73e02e41e119b1adc427a5af4c6f8f10b0923117ca3d52e2c14 not found: ID does not exist" containerID="6be698b2978ad73e02e41e119b1adc427a5af4c6f8f10b0923117ca3d52e2c14" Sep 30 17:38:09 crc kubenswrapper[4821]: I0930 17:38:09.216846 
4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6be698b2978ad73e02e41e119b1adc427a5af4c6f8f10b0923117ca3d52e2c14"} err="failed to get container status \"6be698b2978ad73e02e41e119b1adc427a5af4c6f8f10b0923117ca3d52e2c14\": rpc error: code = NotFound desc = could not find container \"6be698b2978ad73e02e41e119b1adc427a5af4c6f8f10b0923117ca3d52e2c14\": container with ID starting with 6be698b2978ad73e02e41e119b1adc427a5af4c6f8f10b0923117ca3d52e2c14 not found: ID does not exist" Sep 30 17:38:09 crc kubenswrapper[4821]: I0930 17:38:09.216868 4821 scope.go:117] "RemoveContainer" containerID="152d3a8c86cc754224ecc5350a9732292fe79b2d20db8876af75ea468de4409f" Sep 30 17:38:09 crc kubenswrapper[4821]: E0930 17:38:09.217231 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"152d3a8c86cc754224ecc5350a9732292fe79b2d20db8876af75ea468de4409f\": container with ID starting with 152d3a8c86cc754224ecc5350a9732292fe79b2d20db8876af75ea468de4409f not found: ID does not exist" containerID="152d3a8c86cc754224ecc5350a9732292fe79b2d20db8876af75ea468de4409f" Sep 30 17:38:09 crc kubenswrapper[4821]: I0930 17:38:09.217266 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"152d3a8c86cc754224ecc5350a9732292fe79b2d20db8876af75ea468de4409f"} err="failed to get container status \"152d3a8c86cc754224ecc5350a9732292fe79b2d20db8876af75ea468de4409f\": rpc error: code = NotFound desc = could not find container \"152d3a8c86cc754224ecc5350a9732292fe79b2d20db8876af75ea468de4409f\": container with ID starting with 152d3a8c86cc754224ecc5350a9732292fe79b2d20db8876af75ea468de4409f not found: ID does not exist" Sep 30 17:38:09 crc kubenswrapper[4821]: I0930 17:38:09.217285 4821 scope.go:117] "RemoveContainer" containerID="78bd428233adda1d4f0c5210a629fb4ef105ed0105788426d0b3808043aa3084" Sep 30 17:38:09 crc kubenswrapper[4821]: E0930 17:38:09.217647 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78bd428233adda1d4f0c5210a629fb4ef105ed0105788426d0b3808043aa3084\": container with ID starting with 78bd428233adda1d4f0c5210a629fb4ef105ed0105788426d0b3808043aa3084 not found: ID does not exist" containerID="78bd428233adda1d4f0c5210a629fb4ef105ed0105788426d0b3808043aa3084" Sep 30 17:38:09 crc kubenswrapper[4821]: I0930 17:38:09.217685 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78bd428233adda1d4f0c5210a629fb4ef105ed0105788426d0b3808043aa3084"} err="failed to get container status \"78bd428233adda1d4f0c5210a629fb4ef105ed0105788426d0b3808043aa3084\": rpc error: code = NotFound desc = could not find container \"78bd428233adda1d4f0c5210a629fb4ef105ed0105788426d0b3808043aa3084\": container with ID starting with 78bd428233adda1d4f0c5210a629fb4ef105ed0105788426d0b3808043aa3084 not found: ID does not exist" Sep 30 17:38:10 crc kubenswrapper[4821]: I0930 17:38:10.722235 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13131b24-ebca-412f-9926-9fae23180ae7" path="/var/lib/kubelet/pods/13131b24-ebca-412f-9926-9fae23180ae7/volumes" Sep 30 17:39:19 crc kubenswrapper[4821]: I0930 17:39:19.349565 4821 patch_prober.go:28] interesting pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:39:19 crc kubenswrapper[4821]: I0930 17:39:19.350239 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:39:49 crc kubenswrapper[4821]: I0930 17:39:49.350243 4821 patch_prober.go:28] interesting pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:39:49 crc kubenswrapper[4821]: I0930 17:39:49.350677 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:40:17 crc kubenswrapper[4821]: I0930 17:40:17.112990 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q4mjd"] Sep 30 17:40:17 crc kubenswrapper[4821]: E0930 17:40:17.121015 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13131b24-ebca-412f-9926-9fae23180ae7" containerName="extract-content" Sep 30 17:40:17 crc kubenswrapper[4821]: I0930 17:40:17.121058 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="13131b24-ebca-412f-9926-9fae23180ae7" containerName="extract-content" Sep 30 17:40:17 crc kubenswrapper[4821]: E0930 17:40:17.121113 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13131b24-ebca-412f-9926-9fae23180ae7" containerName="extract-utilities" Sep 30 17:40:17 crc kubenswrapper[4821]: I0930 17:40:17.121123 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="13131b24-ebca-412f-9926-9fae23180ae7" containerName="extract-utilities" Sep 30 17:40:17 crc kubenswrapper[4821]: E0930 17:40:17.121140 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13131b24-ebca-412f-9926-9fae23180ae7" containerName="registry-server" Sep 30 17:40:17 crc kubenswrapper[4821]: I0930 17:40:17.121151 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="13131b24-ebca-412f-9926-9fae23180ae7" containerName="registry-server" Sep 30 17:40:17 crc kubenswrapper[4821]: I0930 17:40:17.121401 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="13131b24-ebca-412f-9926-9fae23180ae7" containerName="registry-server" Sep 30 17:40:17 crc kubenswrapper[4821]: I0930 17:40:17.123233 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q4mjd" Sep 30 17:40:17 crc kubenswrapper[4821]: I0930 17:40:17.160278 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q4mjd"] Sep 30 17:40:17 crc kubenswrapper[4821]: I0930 17:40:17.304872 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9906543e-1e49-4f89-a034-a977c384ee07-utilities\") pod \"redhat-operators-q4mjd\" (UID: \"9906543e-1e49-4f89-a034-a977c384ee07\") " pod="openshift-marketplace/redhat-operators-q4mjd" Sep 30 17:40:17 crc kubenswrapper[4821]: I0930 17:40:17.305189 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7qbv\" (UniqueName: \"kubernetes.io/projected/9906543e-1e49-4f89-a034-a977c384ee07-kube-api-access-t7qbv\") pod \"redhat-operators-q4mjd\" (UID: \"9906543e-1e49-4f89-a034-a977c384ee07\") " pod="openshift-marketplace/redhat-operators-q4mjd" Sep 30 17:40:17 crc kubenswrapper[4821]: I0930 17:40:17.305213 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9906543e-1e49-4f89-a034-a977c384ee07-catalog-content\") pod \"redhat-operators-q4mjd\" (UID: \"9906543e-1e49-4f89-a034-a977c384ee07\") " pod="openshift-marketplace/redhat-operators-q4mjd" Sep 30 17:40:17 crc kubenswrapper[4821]: I0930 17:40:17.406678 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7qbv\" (UniqueName: \"kubernetes.io/projected/9906543e-1e49-4f89-a034-a977c384ee07-kube-api-access-t7qbv\") pod \"redhat-operators-q4mjd\" (UID: \"9906543e-1e49-4f89-a034-a977c384ee07\") " pod="openshift-marketplace/redhat-operators-q4mjd" Sep 30 17:40:17 crc kubenswrapper[4821]: I0930 17:40:17.406728 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9906543e-1e49-4f89-a034-a977c384ee07-catalog-content\") pod \"redhat-operators-q4mjd\" (UID: \"9906543e-1e49-4f89-a034-a977c384ee07\") " pod="openshift-marketplace/redhat-operators-q4mjd" Sep 30 17:40:17 crc kubenswrapper[4821]: I0930 17:40:17.406818 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9906543e-1e49-4f89-a034-a977c384ee07-utilities\") pod \"redhat-operators-q4mjd\" (UID: \"9906543e-1e49-4f89-a034-a977c384ee07\") " pod="openshift-marketplace/redhat-operators-q4mjd" Sep 30 17:40:17 crc kubenswrapper[4821]: I0930 17:40:17.407383 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9906543e-1e49-4f89-a034-a977c384ee07-utilities\") pod \"redhat-operators-q4mjd\" (UID: \"9906543e-1e49-4f89-a034-a977c384ee07\") " pod="openshift-marketplace/redhat-operators-q4mjd" Sep 30 17:40:17 crc kubenswrapper[4821]: I0930 17:40:17.407397 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9906543e-1e49-4f89-a034-a977c384ee07-catalog-content\") pod \"redhat-operators-q4mjd\" (UID: \"9906543e-1e49-4f89-a034-a977c384ee07\") " pod="openshift-marketplace/redhat-operators-q4mjd" Sep 30 17:40:17 crc kubenswrapper[4821]: I0930 17:40:17.430548 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-t7qbv\" (UniqueName: \"kubernetes.io/projected/9906543e-1e49-4f89-a034-a977c384ee07-kube-api-access-t7qbv\") pod \"redhat-operators-q4mjd\" (UID: \"9906543e-1e49-4f89-a034-a977c384ee07\") " pod="openshift-marketplace/redhat-operators-q4mjd" Sep 30 17:40:17 crc kubenswrapper[4821]: I0930 17:40:17.459411 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q4mjd" Sep 30 17:40:17 crc kubenswrapper[4821]: I0930 17:40:17.974709 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q4mjd"] Sep 30 17:40:18 crc kubenswrapper[4821]: I0930 17:40:18.170912 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q4mjd" event={"ID":"9906543e-1e49-4f89-a034-a977c384ee07","Type":"ContainerStarted","Data":"626f003f1e7f1568957df14ec7546a9d925eac2d0d02419d80e27711e6b41697"} Sep 30 17:40:19 crc kubenswrapper[4821]: I0930 17:40:19.179123 4821 generic.go:334] "Generic (PLEG): container finished" podID="9906543e-1e49-4f89-a034-a977c384ee07" containerID="778cc0397d424052d19b05d7104143fd34d1a039859abde82c221a1f71133f1c" exitCode=0 Sep 30 17:40:19 crc kubenswrapper[4821]: I0930 17:40:19.179211 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q4mjd" event={"ID":"9906543e-1e49-4f89-a034-a977c384ee07","Type":"ContainerDied","Data":"778cc0397d424052d19b05d7104143fd34d1a039859abde82c221a1f71133f1c"} Sep 30 17:40:19 crc kubenswrapper[4821]: I0930 17:40:19.349469 4821 patch_prober.go:28] interesting pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:40:19 crc kubenswrapper[4821]: I0930 17:40:19.349519 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:40:19 crc kubenswrapper[4821]: I0930 17:40:19.349555 4821 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" Sep 30 17:40:19 crc kubenswrapper[4821]: I0930 17:40:19.350157 4821 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0626195f9dae72276db5956ba650de3351e14934ae2efe7e2e585943db5fdbd3"} pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 17:40:19 crc kubenswrapper[4821]: I0930 17:40:19.350223 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" containerID="cri-o://0626195f9dae72276db5956ba650de3351e14934ae2efe7e2e585943db5fdbd3" gracePeriod=600 Sep 30 17:40:19 crc kubenswrapper[4821]: E0930 17:40:19.468950 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:40:20 crc kubenswrapper[4821]: I0930 17:40:20.188159 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q4mjd" event={"ID":"9906543e-1e49-4f89-a034-a977c384ee07","Type":"ContainerStarted","Data":"f3bf2726eb2c95f8e25bfbc466bd4c6312de02d0a63f5d450fade6405f2f179e"} Sep 30 17:40:20 crc kubenswrapper[4821]: I0930 17:40:20.190740 4821 generic.go:334] "Generic (PLEG): container finished" podID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerID="0626195f9dae72276db5956ba650de3351e14934ae2efe7e2e585943db5fdbd3" exitCode=0 Sep 30 17:40:20 crc kubenswrapper[4821]: I0930 17:40:20.190770 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" event={"ID":"1c2ce348-eadc-4629-a03f-fb8924b5b434","Type":"ContainerDied","Data":"0626195f9dae72276db5956ba650de3351e14934ae2efe7e2e585943db5fdbd3"} Sep 30 17:40:20 crc kubenswrapper[4821]: I0930 17:40:20.190796 4821 scope.go:117] "RemoveContainer" containerID="1f5607e4aadabcca82a33d9e2689d647522a7cea0172a04f8e541b17ae0c348f" Sep 30 17:40:20 crc kubenswrapper[4821]: I0930 17:40:20.191195 4821 scope.go:117] "RemoveContainer" containerID="0626195f9dae72276db5956ba650de3351e14934ae2efe7e2e585943db5fdbd3" Sep 30 17:40:20 crc kubenswrapper[4821]: E0930 17:40:20.191408 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:40:21 crc kubenswrapper[4821]: I0930 17:40:21.199447 4821 generic.go:334] "Generic (PLEG): container finished" podID="9906543e-1e49-4f89-a034-a977c384ee07" containerID="f3bf2726eb2c95f8e25bfbc466bd4c6312de02d0a63f5d450fade6405f2f179e" exitCode=0 Sep 30 17:40:21 crc kubenswrapper[4821]: I0930 17:40:21.199507 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q4mjd" event={"ID":"9906543e-1e49-4f89-a034-a977c384ee07","Type":"ContainerDied","Data":"f3bf2726eb2c95f8e25bfbc466bd4c6312de02d0a63f5d450fade6405f2f179e"} Sep 30 17:40:22 crc kubenswrapper[4821]: I0930 17:40:22.212757 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q4mjd" event={"ID":"9906543e-1e49-4f89-a034-a977c384ee07","Type":"ContainerStarted","Data":"0cfdb4dc2be7b7456295bc8ba7a36310bdb9fac57bcfc5eb9b10fb4dc2cba0d9"} Sep 30 17:40:27 crc kubenswrapper[4821]: I0930 17:40:27.459722 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q4mjd" Sep 30 17:40:27 crc kubenswrapper[4821]: I0930 17:40:27.460261 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q4mjd" Sep 30 17:40:27 crc kubenswrapper[4821]: I0930 17:40:27.511728 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q4mjd" Sep 30 17:40:27 crc kubenswrapper[4821]: I0930 17:40:27.537035 4821 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q4mjd" podStartSLOduration=7.903647104 podStartE2EDuration="10.537016641s" podCreationTimestamp="2025-09-30 17:40:17 +0000 UTC" firstStartedPulling="2025-09-30 17:40:19.182980818 +0000 UTC m=+2215.088026752" lastFinishedPulling="2025-09-30 17:40:21.816350345 +0000 UTC m=+2217.721396289" observedRunningTime="2025-09-30 17:40:22.235135446 +0000 UTC m=+2218.140181390" watchObservedRunningTime="2025-09-30 17:40:27.537016641 +0000 UTC m=+2223.442062585" Sep 30 17:40:28 crc kubenswrapper[4821]: I0930 17:40:28.334450 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q4mjd" Sep 30 17:40:28 crc kubenswrapper[4821]: I0930 17:40:28.404016 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q4mjd"] Sep 30 17:40:30 crc kubenswrapper[4821]: I0930 17:40:30.296616 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q4mjd" podUID="9906543e-1e49-4f89-a034-a977c384ee07" containerName="registry-server" containerID="cri-o://0cfdb4dc2be7b7456295bc8ba7a36310bdb9fac57bcfc5eb9b10fb4dc2cba0d9" gracePeriod=2 Sep 30 17:40:30 crc kubenswrapper[4821]: I0930 17:40:30.760519 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q4mjd" Sep 30 17:40:30 crc kubenswrapper[4821]: I0930 17:40:30.946848 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7qbv\" (UniqueName: \"kubernetes.io/projected/9906543e-1e49-4f89-a034-a977c384ee07-kube-api-access-t7qbv\") pod \"9906543e-1e49-4f89-a034-a977c384ee07\" (UID: \"9906543e-1e49-4f89-a034-a977c384ee07\") " Sep 30 17:40:30 crc kubenswrapper[4821]: I0930 17:40:30.947076 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9906543e-1e49-4f89-a034-a977c384ee07-utilities\") pod \"9906543e-1e49-4f89-a034-a977c384ee07\" (UID: \"9906543e-1e49-4f89-a034-a977c384ee07\") " Sep 30 17:40:30 crc kubenswrapper[4821]: I0930 17:40:30.947176 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9906543e-1e49-4f89-a034-a977c384ee07-catalog-content\") pod \"9906543e-1e49-4f89-a034-a977c384ee07\" (UID: \"9906543e-1e49-4f89-a034-a977c384ee07\") " Sep 30 17:40:30 crc kubenswrapper[4821]: I0930 17:40:30.948193 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9906543e-1e49-4f89-a034-a977c384ee07-utilities" (OuterVolumeSpecName: "utilities") pod "9906543e-1e49-4f89-a034-a977c384ee07" (UID: "9906543e-1e49-4f89-a034-a977c384ee07"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:40:30 crc kubenswrapper[4821]: I0930 17:40:30.958503 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9906543e-1e49-4f89-a034-a977c384ee07-kube-api-access-t7qbv" (OuterVolumeSpecName: "kube-api-access-t7qbv") pod "9906543e-1e49-4f89-a034-a977c384ee07" (UID: "9906543e-1e49-4f89-a034-a977c384ee07"). InnerVolumeSpecName "kube-api-access-t7qbv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:40:31 crc kubenswrapper[4821]: I0930 17:40:31.032513 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9906543e-1e49-4f89-a034-a977c384ee07-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9906543e-1e49-4f89-a034-a977c384ee07" (UID: "9906543e-1e49-4f89-a034-a977c384ee07"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:40:31 crc kubenswrapper[4821]: I0930 17:40:31.049817 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9906543e-1e49-4f89-a034-a977c384ee07-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:40:31 crc kubenswrapper[4821]: I0930 17:40:31.049960 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9906543e-1e49-4f89-a034-a977c384ee07-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:40:31 crc kubenswrapper[4821]: I0930 17:40:31.050031 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7qbv\" (UniqueName: \"kubernetes.io/projected/9906543e-1e49-4f89-a034-a977c384ee07-kube-api-access-t7qbv\") on node \"crc\" DevicePath \"\"" Sep 30 17:40:31 crc kubenswrapper[4821]: I0930 17:40:31.313299 4821 generic.go:334] "Generic (PLEG): container finished" podID="9906543e-1e49-4f89-a034-a977c384ee07" containerID="0cfdb4dc2be7b7456295bc8ba7a36310bdb9fac57bcfc5eb9b10fb4dc2cba0d9" exitCode=0 Sep 30 17:40:31 crc kubenswrapper[4821]: I0930 17:40:31.313346 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q4mjd" event={"ID":"9906543e-1e49-4f89-a034-a977c384ee07","Type":"ContainerDied","Data":"0cfdb4dc2be7b7456295bc8ba7a36310bdb9fac57bcfc5eb9b10fb4dc2cba0d9"} Sep 30 17:40:31 crc kubenswrapper[4821]: I0930 17:40:31.313373 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q4mjd" event={"ID":"9906543e-1e49-4f89-a034-a977c384ee07","Type":"ContainerDied","Data":"626f003f1e7f1568957df14ec7546a9d925eac2d0d02419d80e27711e6b41697"} Sep 30 17:40:31 crc kubenswrapper[4821]: I0930 17:40:31.313389 4821 scope.go:117] "RemoveContainer" containerID="0cfdb4dc2be7b7456295bc8ba7a36310bdb9fac57bcfc5eb9b10fb4dc2cba0d9" Sep 30 17:40:31 crc kubenswrapper[4821]: I0930 17:40:31.313383 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q4mjd" Sep 30 17:40:31 crc kubenswrapper[4821]: I0930 17:40:31.330790 4821 scope.go:117] "RemoveContainer" containerID="f3bf2726eb2c95f8e25bfbc466bd4c6312de02d0a63f5d450fade6405f2f179e" Sep 30 17:40:31 crc kubenswrapper[4821]: I0930 17:40:31.357152 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q4mjd"] Sep 30 17:40:31 crc kubenswrapper[4821]: I0930 17:40:31.367299 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q4mjd"] Sep 30 17:40:31 crc kubenswrapper[4821]: I0930 17:40:31.367415 4821 scope.go:117] "RemoveContainer" containerID="778cc0397d424052d19b05d7104143fd34d1a039859abde82c221a1f71133f1c" Sep 30 17:40:31 crc kubenswrapper[4821]: I0930 17:40:31.394995 4821 scope.go:117] "RemoveContainer" containerID="0cfdb4dc2be7b7456295bc8ba7a36310bdb9fac57bcfc5eb9b10fb4dc2cba0d9" Sep 30 17:40:31 crc kubenswrapper[4821]: E0930 17:40:31.395545 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cfdb4dc2be7b7456295bc8ba7a36310bdb9fac57bcfc5eb9b10fb4dc2cba0d9\": container with ID starting with 0cfdb4dc2be7b7456295bc8ba7a36310bdb9fac57bcfc5eb9b10fb4dc2cba0d9 not found: ID does not exist" containerID="0cfdb4dc2be7b7456295bc8ba7a36310bdb9fac57bcfc5eb9b10fb4dc2cba0d9" Sep 30 17:40:31 crc kubenswrapper[4821]: I0930 17:40:31.395595 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cfdb4dc2be7b7456295bc8ba7a36310bdb9fac57bcfc5eb9b10fb4dc2cba0d9"} err="failed to get container status \"0cfdb4dc2be7b7456295bc8ba7a36310bdb9fac57bcfc5eb9b10fb4dc2cba0d9\": rpc error: code = NotFound desc = could not find container \"0cfdb4dc2be7b7456295bc8ba7a36310bdb9fac57bcfc5eb9b10fb4dc2cba0d9\": container with ID starting with 0cfdb4dc2be7b7456295bc8ba7a36310bdb9fac57bcfc5eb9b10fb4dc2cba0d9 not found: ID does not exist" Sep 30 17:40:31 crc kubenswrapper[4821]: I0930 17:40:31.395628 4821 scope.go:117] "RemoveContainer" containerID="f3bf2726eb2c95f8e25bfbc466bd4c6312de02d0a63f5d450fade6405f2f179e" Sep 30 17:40:31 crc kubenswrapper[4821]: E0930 17:40:31.395937 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3bf2726eb2c95f8e25bfbc466bd4c6312de02d0a63f5d450fade6405f2f179e\": container with ID starting with f3bf2726eb2c95f8e25bfbc466bd4c6312de02d0a63f5d450fade6405f2f179e not found: ID does not exist" containerID="f3bf2726eb2c95f8e25bfbc466bd4c6312de02d0a63f5d450fade6405f2f179e" Sep 30 17:40:31 crc kubenswrapper[4821]: I0930 17:40:31.395969 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3bf2726eb2c95f8e25bfbc466bd4c6312de02d0a63f5d450fade6405f2f179e"} err="failed to get container status \"f3bf2726eb2c95f8e25bfbc466bd4c6312de02d0a63f5d450fade6405f2f179e\": rpc error: code = NotFound desc = could not find container \"f3bf2726eb2c95f8e25bfbc466bd4c6312de02d0a63f5d450fade6405f2f179e\": container with ID starting with f3bf2726eb2c95f8e25bfbc466bd4c6312de02d0a63f5d450fade6405f2f179e not found: ID does not exist" Sep 30 17:40:31 crc kubenswrapper[4821]: I0930 17:40:31.395995 4821 scope.go:117] "RemoveContainer" containerID="778cc0397d424052d19b05d7104143fd34d1a039859abde82c221a1f71133f1c" Sep 30 17:40:31 crc kubenswrapper[4821]: E0930 17:40:31.396269 4821 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"778cc0397d424052d19b05d7104143fd34d1a039859abde82c221a1f71133f1c\": container with ID starting with 778cc0397d424052d19b05d7104143fd34d1a039859abde82c221a1f71133f1c not found: ID does not exist" containerID="778cc0397d424052d19b05d7104143fd34d1a039859abde82c221a1f71133f1c" Sep 30 17:40:31 crc kubenswrapper[4821]: I0930 17:40:31.396313 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"778cc0397d424052d19b05d7104143fd34d1a039859abde82c221a1f71133f1c"} err="failed to get container status \"778cc0397d424052d19b05d7104143fd34d1a039859abde82c221a1f71133f1c\": rpc error: code = NotFound desc = could not find container \"778cc0397d424052d19b05d7104143fd34d1a039859abde82c221a1f71133f1c\": container with ID starting with 778cc0397d424052d19b05d7104143fd34d1a039859abde82c221a1f71133f1c not found: ID does not exist" Sep 30 17:40:32 crc kubenswrapper[4821]: I0930 17:40:32.718492 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9906543e-1e49-4f89-a034-a977c384ee07" path="/var/lib/kubelet/pods/9906543e-1e49-4f89-a034-a977c384ee07/volumes" Sep 30 17:40:34 crc kubenswrapper[4821]: I0930 17:40:34.712263 4821 scope.go:117] "RemoveContainer" containerID="0626195f9dae72276db5956ba650de3351e14934ae2efe7e2e585943db5fdbd3" Sep 30 17:40:34 crc kubenswrapper[4821]: E0930 17:40:34.712792 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:40:45 crc kubenswrapper[4821]: I0930 17:40:45.708214 4821 scope.go:117] "RemoveContainer" containerID="0626195f9dae72276db5956ba650de3351e14934ae2efe7e2e585943db5fdbd3" Sep 30 17:40:45 crc kubenswrapper[4821]: E0930 17:40:45.708988 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:41:00 crc kubenswrapper[4821]: I0930 17:41:00.707233 4821 scope.go:117] "RemoveContainer" containerID="0626195f9dae72276db5956ba650de3351e14934ae2efe7e2e585943db5fdbd3" Sep 30 17:41:00 crc kubenswrapper[4821]: E0930 17:41:00.707971 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:41:15 crc kubenswrapper[4821]: I0930 17:41:15.707224 4821 scope.go:117] "RemoveContainer" containerID="0626195f9dae72276db5956ba650de3351e14934ae2efe7e2e585943db5fdbd3" Sep 30 17:41:15 crc kubenswrapper[4821]: E0930 17:41:15.708193 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:41:28 crc kubenswrapper[4821]: I0930 17:41:28.706408 4821 scope.go:117] "RemoveContainer" containerID="0626195f9dae72276db5956ba650de3351e14934ae2efe7e2e585943db5fdbd3" Sep 30 17:41:28 crc kubenswrapper[4821]: E0930 17:41:28.706859 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:41:41 crc kubenswrapper[4821]: I0930 17:41:41.707549 4821 scope.go:117] "RemoveContainer" containerID="0626195f9dae72276db5956ba650de3351e14934ae2efe7e2e585943db5fdbd3" Sep 30 17:41:41 crc kubenswrapper[4821]: E0930 17:41:41.708265 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:41:42 crc kubenswrapper[4821]: I0930 17:41:42.259419 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rr4nr"] Sep 30 17:41:42 crc kubenswrapper[4821]: E0930 17:41:42.259839 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9906543e-1e49-4f89-a034-a977c384ee07" containerName="extract-content" Sep 30 17:41:42 crc kubenswrapper[4821]: I0930 17:41:42.259863 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="9906543e-1e49-4f89-a034-a977c384ee07" containerName="extract-content" Sep 30 17:41:42 crc kubenswrapper[4821]: E0930 17:41:42.259878 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9906543e-1e49-4f89-a034-a977c384ee07" containerName="registry-server" Sep 30 17:41:42 crc kubenswrapper[4821]: I0930 17:41:42.259886 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="9906543e-1e49-4f89-a034-a977c384ee07" containerName="registry-server" Sep 30 17:41:42 crc kubenswrapper[4821]: E0930 17:41:42.259916 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9906543e-1e49-4f89-a034-a977c384ee07" containerName="extract-utilities" Sep 30 17:41:42 crc kubenswrapper[4821]: I0930 17:41:42.259927 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="9906543e-1e49-4f89-a034-a977c384ee07" containerName="extract-utilities" Sep 30 17:41:42 crc kubenswrapper[4821]: I0930 17:41:42.260220 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="9906543e-1e49-4f89-a034-a977c384ee07" containerName="registry-server" Sep 30 17:41:42 crc kubenswrapper[4821]: I0930 17:41:42.261919 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rr4nr" Sep 30 17:41:42 crc kubenswrapper[4821]: I0930 17:41:42.284538 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rr4nr"] Sep 30 17:41:42 crc kubenswrapper[4821]: I0930 17:41:42.426679 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/592efc8f-0a7b-4c35-adfb-89f9b46331d1-utilities\") pod \"redhat-marketplace-rr4nr\" (UID: \"592efc8f-0a7b-4c35-adfb-89f9b46331d1\") " pod="openshift-marketplace/redhat-marketplace-rr4nr" Sep 30 17:41:42 crc kubenswrapper[4821]: I0930 17:41:42.427105 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stswj\" (UniqueName: \"kubernetes.io/projected/592efc8f-0a7b-4c35-adfb-89f9b46331d1-kube-api-access-stswj\") pod \"redhat-marketplace-rr4nr\" (UID: \"592efc8f-0a7b-4c35-adfb-89f9b46331d1\") " pod="openshift-marketplace/redhat-marketplace-rr4nr" Sep 30 17:41:42 crc kubenswrapper[4821]: I0930 17:41:42.427159 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/592efc8f-0a7b-4c35-adfb-89f9b46331d1-catalog-content\") pod \"redhat-marketplace-rr4nr\" (UID: \"592efc8f-0a7b-4c35-adfb-89f9b46331d1\") " pod="openshift-marketplace/redhat-marketplace-rr4nr" Sep 30 17:41:42 crc kubenswrapper[4821]: I0930 17:41:42.529339 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stswj\" (UniqueName: \"kubernetes.io/projected/592efc8f-0a7b-4c35-adfb-89f9b46331d1-kube-api-access-stswj\") pod \"redhat-marketplace-rr4nr\" (UID: \"592efc8f-0a7b-4c35-adfb-89f9b46331d1\") " pod="openshift-marketplace/redhat-marketplace-rr4nr" Sep 30 17:41:42 crc kubenswrapper[4821]: I0930 17:41:42.529703 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/592efc8f-0a7b-4c35-adfb-89f9b46331d1-catalog-content\") pod \"redhat-marketplace-rr4nr\" (UID: \"592efc8f-0a7b-4c35-adfb-89f9b46331d1\") " pod="openshift-marketplace/redhat-marketplace-rr4nr" Sep 30 17:41:42 crc kubenswrapper[4821]: I0930 17:41:42.529901 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/592efc8f-0a7b-4c35-adfb-89f9b46331d1-utilities\") pod \"redhat-marketplace-rr4nr\" (UID: \"592efc8f-0a7b-4c35-adfb-89f9b46331d1\") " pod="openshift-marketplace/redhat-marketplace-rr4nr" Sep 30 17:41:42 crc kubenswrapper[4821]: I0930 17:41:42.530058 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/592efc8f-0a7b-4c35-adfb-89f9b46331d1-catalog-content\") pod \"redhat-marketplace-rr4nr\" (UID: \"592efc8f-0a7b-4c35-adfb-89f9b46331d1\") " pod="openshift-marketplace/redhat-marketplace-rr4nr" Sep 30 17:41:42 crc kubenswrapper[4821]: I0930 17:41:42.530249 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/592efc8f-0a7b-4c35-adfb-89f9b46331d1-utilities\") pod \"redhat-marketplace-rr4nr\" (UID: \"592efc8f-0a7b-4c35-adfb-89f9b46331d1\") " pod="openshift-marketplace/redhat-marketplace-rr4nr" Sep 30 17:41:42 crc kubenswrapper[4821]: I0930 17:41:42.551181 4821 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-stswj\" (UniqueName: \"kubernetes.io/projected/592efc8f-0a7b-4c35-adfb-89f9b46331d1-kube-api-access-stswj\") pod \"redhat-marketplace-rr4nr\" (UID: \"592efc8f-0a7b-4c35-adfb-89f9b46331d1\") " pod="openshift-marketplace/redhat-marketplace-rr4nr" Sep 30 17:41:42 crc kubenswrapper[4821]: I0930 17:41:42.583810 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rr4nr" Sep 30 17:41:43 crc kubenswrapper[4821]: I0930 17:41:43.047242 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rr4nr"] Sep 30 17:41:43 crc kubenswrapper[4821]: W0930 17:41:43.056477 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod592efc8f_0a7b_4c35_adfb_89f9b46331d1.slice/crio-5a5beb2b1c51b97ee57d46257f12b86d1998b57a6f46ebffbfe06a57c5f9be38 WatchSource:0}: Error finding container 5a5beb2b1c51b97ee57d46257f12b86d1998b57a6f46ebffbfe06a57c5f9be38: Status 404 returned error can't find the container with id 5a5beb2b1c51b97ee57d46257f12b86d1998b57a6f46ebffbfe06a57c5f9be38 Sep 30 17:41:43 crc kubenswrapper[4821]: I0930 17:41:43.983319 4821 generic.go:334] "Generic (PLEG): container finished" podID="592efc8f-0a7b-4c35-adfb-89f9b46331d1" containerID="aa784938c9931902cc6b79c2c676a5e4be48e00bd1e1017fbea4d572937a249b" exitCode=0 Sep 30 17:41:43 crc kubenswrapper[4821]: I0930 17:41:43.983679 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rr4nr" event={"ID":"592efc8f-0a7b-4c35-adfb-89f9b46331d1","Type":"ContainerDied","Data":"aa784938c9931902cc6b79c2c676a5e4be48e00bd1e1017fbea4d572937a249b"} Sep 30 17:41:43 crc kubenswrapper[4821]: I0930 17:41:43.983713 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rr4nr" event={"ID":"592efc8f-0a7b-4c35-adfb-89f9b46331d1","Type":"ContainerStarted","Data":"5a5beb2b1c51b97ee57d46257f12b86d1998b57a6f46ebffbfe06a57c5f9be38"} Sep 30 17:41:44 crc kubenswrapper[4821]: I0930 17:41:44.994285 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rr4nr" event={"ID":"592efc8f-0a7b-4c35-adfb-89f9b46331d1","Type":"ContainerStarted","Data":"24da2d21a906a81859f8103f4409f270d9336929d45ffd9a053eaa9f9db1189a"} Sep 30 17:41:46 crc kubenswrapper[4821]: I0930 17:41:46.002119 4821 generic.go:334] "Generic (PLEG): container finished" podID="592efc8f-0a7b-4c35-adfb-89f9b46331d1" containerID="24da2d21a906a81859f8103f4409f270d9336929d45ffd9a053eaa9f9db1189a" exitCode=0 Sep 30 17:41:46 crc kubenswrapper[4821]: I0930 17:41:46.002177 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rr4nr" event={"ID":"592efc8f-0a7b-4c35-adfb-89f9b46331d1","Type":"ContainerDied","Data":"24da2d21a906a81859f8103f4409f270d9336929d45ffd9a053eaa9f9db1189a"} Sep 30 17:41:47 crc kubenswrapper[4821]: I0930 17:41:47.013075 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rr4nr" event={"ID":"592efc8f-0a7b-4c35-adfb-89f9b46331d1","Type":"ContainerStarted","Data":"4602ca5c8c87f5f68949f4e48539499f4c211edbcfcde3989cff12e3391a1802"} Sep 30 17:41:47 crc kubenswrapper[4821]: I0930 17:41:47.036581 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rr4nr" podStartSLOduration=2.5144627980000003 
podStartE2EDuration="5.036561307s" podCreationTimestamp="2025-09-30 17:41:42 +0000 UTC" firstStartedPulling="2025-09-30 17:41:43.985957108 +0000 UTC m=+2299.891003052" lastFinishedPulling="2025-09-30 17:41:46.508055617 +0000 UTC m=+2302.413101561" observedRunningTime="2025-09-30 17:41:47.029493982 +0000 UTC m=+2302.934539936" watchObservedRunningTime="2025-09-30 17:41:47.036561307 +0000 UTC m=+2302.941607261" Sep 30 17:41:52 crc kubenswrapper[4821]: I0930 17:41:52.584774 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rr4nr" Sep 30 17:41:52 crc kubenswrapper[4821]: I0930 17:41:52.585272 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rr4nr" Sep 30 17:41:52 crc kubenswrapper[4821]: I0930 17:41:52.660339 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rr4nr" Sep 30 17:41:53 crc kubenswrapper[4821]: I0930 17:41:53.108169 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rr4nr" Sep 30 17:41:53 crc kubenswrapper[4821]: I0930 17:41:53.154703 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rr4nr"] Sep 30 17:41:54 crc kubenswrapper[4821]: I0930 17:41:54.711680 4821 scope.go:117] "RemoveContainer" containerID="0626195f9dae72276db5956ba650de3351e14934ae2efe7e2e585943db5fdbd3" Sep 30 17:41:54 crc kubenswrapper[4821]: E0930 17:41:54.712200 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:41:55 crc kubenswrapper[4821]: I0930 17:41:55.078519 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rr4nr" podUID="592efc8f-0a7b-4c35-adfb-89f9b46331d1" containerName="registry-server" containerID="cri-o://4602ca5c8c87f5f68949f4e48539499f4c211edbcfcde3989cff12e3391a1802" gracePeriod=2 Sep 30 17:41:55 crc kubenswrapper[4821]: I0930 17:41:55.547859 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rr4nr" Sep 30 17:41:55 crc kubenswrapper[4821]: I0930 17:41:55.556344 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/592efc8f-0a7b-4c35-adfb-89f9b46331d1-utilities\") pod \"592efc8f-0a7b-4c35-adfb-89f9b46331d1\" (UID: \"592efc8f-0a7b-4c35-adfb-89f9b46331d1\") " Sep 30 17:41:55 crc kubenswrapper[4821]: I0930 17:41:55.556391 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stswj\" (UniqueName: \"kubernetes.io/projected/592efc8f-0a7b-4c35-adfb-89f9b46331d1-kube-api-access-stswj\") pod \"592efc8f-0a7b-4c35-adfb-89f9b46331d1\" (UID: \"592efc8f-0a7b-4c35-adfb-89f9b46331d1\") " Sep 30 17:41:55 crc kubenswrapper[4821]: I0930 17:41:55.556660 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/592efc8f-0a7b-4c35-adfb-89f9b46331d1-catalog-content\") pod \"592efc8f-0a7b-4c35-adfb-89f9b46331d1\" (UID: \"592efc8f-0a7b-4c35-adfb-89f9b46331d1\") " Sep 30 17:41:55 crc kubenswrapper[4821]: I0930 17:41:55.558544 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/592efc8f-0a7b-4c35-adfb-89f9b46331d1-utilities" (OuterVolumeSpecName: "utilities") pod "592efc8f-0a7b-4c35-adfb-89f9b46331d1" (UID: "592efc8f-0a7b-4c35-adfb-89f9b46331d1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:41:55 crc kubenswrapper[4821]: I0930 17:41:55.563994 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/592efc8f-0a7b-4c35-adfb-89f9b46331d1-kube-api-access-stswj" (OuterVolumeSpecName: "kube-api-access-stswj") pod "592efc8f-0a7b-4c35-adfb-89f9b46331d1" (UID: "592efc8f-0a7b-4c35-adfb-89f9b46331d1"). InnerVolumeSpecName "kube-api-access-stswj". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:41:55 crc kubenswrapper[4821]: I0930 17:41:55.580682 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/592efc8f-0a7b-4c35-adfb-89f9b46331d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "592efc8f-0a7b-4c35-adfb-89f9b46331d1" (UID: "592efc8f-0a7b-4c35-adfb-89f9b46331d1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:41:55 crc kubenswrapper[4821]: I0930 17:41:55.658748 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/592efc8f-0a7b-4c35-adfb-89f9b46331d1-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:41:55 crc kubenswrapper[4821]: I0930 17:41:55.658797 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/592efc8f-0a7b-4c35-adfb-89f9b46331d1-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:41:55 crc kubenswrapper[4821]: I0930 17:41:55.658815 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stswj\" (UniqueName: \"kubernetes.io/projected/592efc8f-0a7b-4c35-adfb-89f9b46331d1-kube-api-access-stswj\") on node \"crc\" DevicePath \"\"" Sep 30 17:41:56 crc kubenswrapper[4821]: I0930 17:41:56.087644 4821 generic.go:334] "Generic (PLEG): container finished" podID="592efc8f-0a7b-4c35-adfb-89f9b46331d1" containerID="4602ca5c8c87f5f68949f4e48539499f4c211edbcfcde3989cff12e3391a1802" exitCode=0 Sep 30 17:41:56 crc kubenswrapper[4821]: I0930 17:41:56.087956 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rr4nr" event={"ID":"592efc8f-0a7b-4c35-adfb-89f9b46331d1","Type":"ContainerDied","Data":"4602ca5c8c87f5f68949f4e48539499f4c211edbcfcde3989cff12e3391a1802"} Sep 30 17:41:56 crc kubenswrapper[4821]: I0930 17:41:56.087989 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rr4nr" event={"ID":"592efc8f-0a7b-4c35-adfb-89f9b46331d1","Type":"ContainerDied","Data":"5a5beb2b1c51b97ee57d46257f12b86d1998b57a6f46ebffbfe06a57c5f9be38"} Sep 30 17:41:56 crc kubenswrapper[4821]: I0930 17:41:56.088010 4821 scope.go:117] "RemoveContainer" containerID="4602ca5c8c87f5f68949f4e48539499f4c211edbcfcde3989cff12e3391a1802" Sep 30 17:41:56 crc kubenswrapper[4821]: I0930 17:41:56.088185 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rr4nr" Sep 30 17:41:56 crc kubenswrapper[4821]: I0930 17:41:56.127889 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rr4nr"] Sep 30 17:41:56 crc kubenswrapper[4821]: I0930 17:41:56.128796 4821 scope.go:117] "RemoveContainer" containerID="24da2d21a906a81859f8103f4409f270d9336929d45ffd9a053eaa9f9db1189a" Sep 30 17:41:56 crc kubenswrapper[4821]: I0930 17:41:56.143854 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rr4nr"] Sep 30 17:41:56 crc kubenswrapper[4821]: I0930 17:41:56.152546 4821 scope.go:117] "RemoveContainer" containerID="aa784938c9931902cc6b79c2c676a5e4be48e00bd1e1017fbea4d572937a249b" Sep 30 17:41:56 crc kubenswrapper[4821]: I0930 17:41:56.186353 4821 scope.go:117] "RemoveContainer" containerID="4602ca5c8c87f5f68949f4e48539499f4c211edbcfcde3989cff12e3391a1802" Sep 30 17:41:56 crc kubenswrapper[4821]: E0930 17:41:56.186878 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4602ca5c8c87f5f68949f4e48539499f4c211edbcfcde3989cff12e3391a1802\": container with ID starting with 4602ca5c8c87f5f68949f4e48539499f4c211edbcfcde3989cff12e3391a1802 not found: ID does not exist" containerID="4602ca5c8c87f5f68949f4e48539499f4c211edbcfcde3989cff12e3391a1802" Sep 30 17:41:56 crc kubenswrapper[4821]: I0930 17:41:56.186933 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4602ca5c8c87f5f68949f4e48539499f4c211edbcfcde3989cff12e3391a1802"} err="failed to get container status \"4602ca5c8c87f5f68949f4e48539499f4c211edbcfcde3989cff12e3391a1802\": rpc error: code = NotFound desc = could not find container \"4602ca5c8c87f5f68949f4e48539499f4c211edbcfcde3989cff12e3391a1802\": container with ID starting with 4602ca5c8c87f5f68949f4e48539499f4c211edbcfcde3989cff12e3391a1802 not found: ID does not exist" Sep 30 17:41:56 crc kubenswrapper[4821]: I0930 17:41:56.186967 4821 scope.go:117] "RemoveContainer" containerID="24da2d21a906a81859f8103f4409f270d9336929d45ffd9a053eaa9f9db1189a" Sep 30 17:41:56 crc kubenswrapper[4821]: E0930 17:41:56.187270 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24da2d21a906a81859f8103f4409f270d9336929d45ffd9a053eaa9f9db1189a\": container with ID starting with 24da2d21a906a81859f8103f4409f270d9336929d45ffd9a053eaa9f9db1189a not found: ID does not exist" containerID="24da2d21a906a81859f8103f4409f270d9336929d45ffd9a053eaa9f9db1189a" Sep 30 17:41:56 crc kubenswrapper[4821]: I0930 17:41:56.187296 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24da2d21a906a81859f8103f4409f270d9336929d45ffd9a053eaa9f9db1189a"} err="failed to get container status \"24da2d21a906a81859f8103f4409f270d9336929d45ffd9a053eaa9f9db1189a\": rpc error: code = NotFound desc = could not find container \"24da2d21a906a81859f8103f4409f270d9336929d45ffd9a053eaa9f9db1189a\": container with ID starting with 24da2d21a906a81859f8103f4409f270d9336929d45ffd9a053eaa9f9db1189a not found: ID does not exist" Sep 30 17:41:56 crc kubenswrapper[4821]: I0930 17:41:56.187319 4821 scope.go:117] "RemoveContainer" containerID="aa784938c9931902cc6b79c2c676a5e4be48e00bd1e1017fbea4d572937a249b" Sep 30 17:41:56 crc kubenswrapper[4821]: E0930 17:41:56.187586 4821 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"aa784938c9931902cc6b79c2c676a5e4be48e00bd1e1017fbea4d572937a249b\": container with ID starting with aa784938c9931902cc6b79c2c676a5e4be48e00bd1e1017fbea4d572937a249b not found: ID does not exist" containerID="aa784938c9931902cc6b79c2c676a5e4be48e00bd1e1017fbea4d572937a249b" Sep 30 17:41:56 crc kubenswrapper[4821]: I0930 17:41:56.187616 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa784938c9931902cc6b79c2c676a5e4be48e00bd1e1017fbea4d572937a249b"} err="failed to get container status \"aa784938c9931902cc6b79c2c676a5e4be48e00bd1e1017fbea4d572937a249b\": rpc error: code = NotFound desc = could not find container \"aa784938c9931902cc6b79c2c676a5e4be48e00bd1e1017fbea4d572937a249b\": container with ID starting with aa784938c9931902cc6b79c2c676a5e4be48e00bd1e1017fbea4d572937a249b not found: ID does not exist" Sep 30 17:41:56 crc kubenswrapper[4821]: I0930 17:41:56.722150 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="592efc8f-0a7b-4c35-adfb-89f9b46331d1" path="/var/lib/kubelet/pods/592efc8f-0a7b-4c35-adfb-89f9b46331d1/volumes" Sep 30 17:42:06 crc kubenswrapper[4821]: I0930 17:42:06.708267 4821 scope.go:117] "RemoveContainer" containerID="0626195f9dae72276db5956ba650de3351e14934ae2efe7e2e585943db5fdbd3" Sep 30 17:42:06 crc kubenswrapper[4821]: E0930 17:42:06.709235 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:42:21 crc kubenswrapper[4821]: I0930 17:42:21.708053 4821 scope.go:117] "RemoveContainer" containerID="0626195f9dae72276db5956ba650de3351e14934ae2efe7e2e585943db5fdbd3" Sep 30 17:42:21 crc kubenswrapper[4821]: E0930 17:42:21.710072 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:42:34 crc kubenswrapper[4821]: I0930 17:42:34.712245 4821 scope.go:117] "RemoveContainer" containerID="0626195f9dae72276db5956ba650de3351e14934ae2efe7e2e585943db5fdbd3" Sep 30 17:42:34 crc kubenswrapper[4821]: E0930 17:42:34.715378 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:42:48 crc kubenswrapper[4821]: I0930 17:42:48.707822 4821 scope.go:117] "RemoveContainer" containerID="0626195f9dae72276db5956ba650de3351e14934ae2efe7e2e585943db5fdbd3" Sep 30 17:42:48 crc kubenswrapper[4821]: E0930 17:42:48.708626 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:42:59 crc kubenswrapper[4821]: I0930 17:42:59.389398 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vpp47"] Sep 30 17:42:59 crc kubenswrapper[4821]: E0930 17:42:59.392533 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="592efc8f-0a7b-4c35-adfb-89f9b46331d1" containerName="registry-server" Sep 30 17:42:59 crc kubenswrapper[4821]: I0930 17:42:59.392570 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="592efc8f-0a7b-4c35-adfb-89f9b46331d1" containerName="registry-server" Sep 30 17:42:59 crc kubenswrapper[4821]: E0930 17:42:59.392593 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="592efc8f-0a7b-4c35-adfb-89f9b46331d1" containerName="extract-content" Sep 30 17:42:59 crc kubenswrapper[4821]: I0930 17:42:59.392602 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="592efc8f-0a7b-4c35-adfb-89f9b46331d1" containerName="extract-content" Sep 30 17:42:59 crc kubenswrapper[4821]: E0930 17:42:59.392622 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="592efc8f-0a7b-4c35-adfb-89f9b46331d1" containerName="extract-utilities" Sep 30 17:42:59 crc kubenswrapper[4821]: I0930 17:42:59.392631 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="592efc8f-0a7b-4c35-adfb-89f9b46331d1" containerName="extract-utilities" Sep 30 17:42:59 crc kubenswrapper[4821]: I0930 17:42:59.392908 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="592efc8f-0a7b-4c35-adfb-89f9b46331d1" containerName="registry-server" Sep 30 17:42:59 crc kubenswrapper[4821]: I0930 17:42:59.394745 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vpp47" Sep 30 17:42:59 crc kubenswrapper[4821]: I0930 17:42:59.400845 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vpp47"] Sep 30 17:42:59 crc kubenswrapper[4821]: I0930 17:42:59.584028 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rlhw\" (UniqueName: \"kubernetes.io/projected/c26f1f3e-6677-4b03-9b9d-3b441c621d77-kube-api-access-2rlhw\") pod \"certified-operators-vpp47\" (UID: \"c26f1f3e-6677-4b03-9b9d-3b441c621d77\") " pod="openshift-marketplace/certified-operators-vpp47" Sep 30 17:42:59 crc kubenswrapper[4821]: I0930 17:42:59.584106 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c26f1f3e-6677-4b03-9b9d-3b441c621d77-utilities\") pod \"certified-operators-vpp47\" (UID: \"c26f1f3e-6677-4b03-9b9d-3b441c621d77\") " pod="openshift-marketplace/certified-operators-vpp47" Sep 30 17:42:59 crc kubenswrapper[4821]: I0930 17:42:59.584140 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c26f1f3e-6677-4b03-9b9d-3b441c621d77-catalog-content\") pod \"certified-operators-vpp47\" (UID: \"c26f1f3e-6677-4b03-9b9d-3b441c621d77\") " pod="openshift-marketplace/certified-operators-vpp47" Sep 30 17:42:59 crc kubenswrapper[4821]: I0930 17:42:59.685543 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rlhw\" (UniqueName: \"kubernetes.io/projected/c26f1f3e-6677-4b03-9b9d-3b441c621d77-kube-api-access-2rlhw\") pod \"certified-operators-vpp47\" (UID: \"c26f1f3e-6677-4b03-9b9d-3b441c621d77\") " pod="openshift-marketplace/certified-operators-vpp47" Sep 30 17:42:59 crc kubenswrapper[4821]: I0930 17:42:59.685613 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c26f1f3e-6677-4b03-9b9d-3b441c621d77-utilities\") pod \"certified-operators-vpp47\" (UID: \"c26f1f3e-6677-4b03-9b9d-3b441c621d77\") " pod="openshift-marketplace/certified-operators-vpp47" Sep 30 17:42:59 crc kubenswrapper[4821]: I0930 17:42:59.685665 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c26f1f3e-6677-4b03-9b9d-3b441c621d77-catalog-content\") pod \"certified-operators-vpp47\" (UID: \"c26f1f3e-6677-4b03-9b9d-3b441c621d77\") " pod="openshift-marketplace/certified-operators-vpp47" Sep 30 17:42:59 crc kubenswrapper[4821]: I0930 17:42:59.686267 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c26f1f3e-6677-4b03-9b9d-3b441c621d77-utilities\") pod \"certified-operators-vpp47\" (UID: \"c26f1f3e-6677-4b03-9b9d-3b441c621d77\") " pod="openshift-marketplace/certified-operators-vpp47" Sep 30 17:42:59 crc kubenswrapper[4821]: I0930 17:42:59.686366 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c26f1f3e-6677-4b03-9b9d-3b441c621d77-catalog-content\") pod \"certified-operators-vpp47\" (UID: \"c26f1f3e-6677-4b03-9b9d-3b441c621d77\") " pod="openshift-marketplace/certified-operators-vpp47" Sep 30 17:42:59 crc kubenswrapper[4821]: I0930 17:42:59.703364 4821 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2rlhw\" (UniqueName: \"kubernetes.io/projected/c26f1f3e-6677-4b03-9b9d-3b441c621d77-kube-api-access-2rlhw\") pod \"certified-operators-vpp47\" (UID: \"c26f1f3e-6677-4b03-9b9d-3b441c621d77\") " pod="openshift-marketplace/certified-operators-vpp47" Sep 30 17:42:59 crc kubenswrapper[4821]: I0930 17:42:59.723910 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vpp47" Sep 30 17:43:00 crc kubenswrapper[4821]: I0930 17:43:00.733833 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vpp47"] Sep 30 17:43:01 crc kubenswrapper[4821]: I0930 17:43:01.681399 4821 generic.go:334] "Generic (PLEG): container finished" podID="c26f1f3e-6677-4b03-9b9d-3b441c621d77" containerID="2a7644972e0a1ae24fca8bed9b16b90e98cd3fcb224e3503130a7719cfc3b9d9" exitCode=0 Sep 30 17:43:01 crc kubenswrapper[4821]: I0930 17:43:01.681467 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vpp47" event={"ID":"c26f1f3e-6677-4b03-9b9d-3b441c621d77","Type":"ContainerDied","Data":"2a7644972e0a1ae24fca8bed9b16b90e98cd3fcb224e3503130a7719cfc3b9d9"} Sep 30 17:43:01 crc kubenswrapper[4821]: I0930 17:43:01.682007 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vpp47" event={"ID":"c26f1f3e-6677-4b03-9b9d-3b441c621d77","Type":"ContainerStarted","Data":"4021e50ceea71225bec3b941f176f1a135b8be758c3ba3cfcd4a29d65bff3f4d"} Sep 30 17:43:01 crc kubenswrapper[4821]: I0930 17:43:01.683405 4821 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 30 17:43:03 crc kubenswrapper[4821]: I0930 17:43:03.706465 4821 scope.go:117] "RemoveContainer" containerID="0626195f9dae72276db5956ba650de3351e14934ae2efe7e2e585943db5fdbd3" Sep 30 17:43:03 crc kubenswrapper[4821]: E0930 17:43:03.707234 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:43:03 crc kubenswrapper[4821]: I0930 17:43:03.708345 4821 generic.go:334] "Generic (PLEG): container finished" podID="c26f1f3e-6677-4b03-9b9d-3b441c621d77" containerID="a93f69d3cfb9fa7fb1971c3ebe6d53038d81ca2af6faccf197eb03708131f3d8" exitCode=0 Sep 30 17:43:03 crc kubenswrapper[4821]: I0930 17:43:03.708387 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vpp47" event={"ID":"c26f1f3e-6677-4b03-9b9d-3b441c621d77","Type":"ContainerDied","Data":"a93f69d3cfb9fa7fb1971c3ebe6d53038d81ca2af6faccf197eb03708131f3d8"} Sep 30 17:43:05 crc kubenswrapper[4821]: I0930 17:43:05.723734 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vpp47" event={"ID":"c26f1f3e-6677-4b03-9b9d-3b441c621d77","Type":"ContainerStarted","Data":"b617ac7671fd46da1a97a5a5ba6811c04654a8f366fa9bb4c74ffc932aa48f93"} Sep 30 17:43:05 crc kubenswrapper[4821]: I0930 17:43:05.741612 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vpp47" podStartSLOduration=3.893916461 
podStartE2EDuration="6.74158774s" podCreationTimestamp="2025-09-30 17:42:59 +0000 UTC" firstStartedPulling="2025-09-30 17:43:01.68319302 +0000 UTC m=+2377.588238964" lastFinishedPulling="2025-09-30 17:43:04.530864289 +0000 UTC m=+2380.435910243" observedRunningTime="2025-09-30 17:43:05.737467609 +0000 UTC m=+2381.642513563" watchObservedRunningTime="2025-09-30 17:43:05.74158774 +0000 UTC m=+2381.646633684" Sep 30 17:43:09 crc kubenswrapper[4821]: I0930 17:43:09.724498 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vpp47" Sep 30 17:43:09 crc kubenswrapper[4821]: I0930 17:43:09.724884 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vpp47" Sep 30 17:43:09 crc kubenswrapper[4821]: I0930 17:43:09.800924 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vpp47" Sep 30 17:43:09 crc kubenswrapper[4821]: I0930 17:43:09.871151 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vpp47" Sep 30 17:43:10 crc kubenswrapper[4821]: I0930 17:43:10.061547 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vpp47"] Sep 30 17:43:11 crc kubenswrapper[4821]: I0930 17:43:11.771458 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vpp47" podUID="c26f1f3e-6677-4b03-9b9d-3b441c621d77" containerName="registry-server" containerID="cri-o://b617ac7671fd46da1a97a5a5ba6811c04654a8f366fa9bb4c74ffc932aa48f93" gracePeriod=2 Sep 30 17:43:12 crc kubenswrapper[4821]: I0930 17:43:12.167513 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vpp47" Sep 30 17:43:12 crc kubenswrapper[4821]: I0930 17:43:12.319272 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c26f1f3e-6677-4b03-9b9d-3b441c621d77-catalog-content\") pod \"c26f1f3e-6677-4b03-9b9d-3b441c621d77\" (UID: \"c26f1f3e-6677-4b03-9b9d-3b441c621d77\") " Sep 30 17:43:12 crc kubenswrapper[4821]: I0930 17:43:12.319430 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c26f1f3e-6677-4b03-9b9d-3b441c621d77-utilities\") pod \"c26f1f3e-6677-4b03-9b9d-3b441c621d77\" (UID: \"c26f1f3e-6677-4b03-9b9d-3b441c621d77\") " Sep 30 17:43:12 crc kubenswrapper[4821]: I0930 17:43:12.319497 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rlhw\" (UniqueName: \"kubernetes.io/projected/c26f1f3e-6677-4b03-9b9d-3b441c621d77-kube-api-access-2rlhw\") pod \"c26f1f3e-6677-4b03-9b9d-3b441c621d77\" (UID: \"c26f1f3e-6677-4b03-9b9d-3b441c621d77\") " Sep 30 17:43:12 crc kubenswrapper[4821]: I0930 17:43:12.321815 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c26f1f3e-6677-4b03-9b9d-3b441c621d77-utilities" (OuterVolumeSpecName: "utilities") pod "c26f1f3e-6677-4b03-9b9d-3b441c621d77" (UID: "c26f1f3e-6677-4b03-9b9d-3b441c621d77"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:43:12 crc kubenswrapper[4821]: I0930 17:43:12.325570 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c26f1f3e-6677-4b03-9b9d-3b441c621d77-kube-api-access-2rlhw" (OuterVolumeSpecName: "kube-api-access-2rlhw") pod "c26f1f3e-6677-4b03-9b9d-3b441c621d77" (UID: "c26f1f3e-6677-4b03-9b9d-3b441c621d77"). InnerVolumeSpecName "kube-api-access-2rlhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:43:12 crc kubenswrapper[4821]: I0930 17:43:12.372390 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c26f1f3e-6677-4b03-9b9d-3b441c621d77-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c26f1f3e-6677-4b03-9b9d-3b441c621d77" (UID: "c26f1f3e-6677-4b03-9b9d-3b441c621d77"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:43:12 crc kubenswrapper[4821]: I0930 17:43:12.421317 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rlhw\" (UniqueName: \"kubernetes.io/projected/c26f1f3e-6677-4b03-9b9d-3b441c621d77-kube-api-access-2rlhw\") on node \"crc\" DevicePath \"\"" Sep 30 17:43:12 crc kubenswrapper[4821]: I0930 17:43:12.421346 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c26f1f3e-6677-4b03-9b9d-3b441c621d77-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:43:12 crc kubenswrapper[4821]: I0930 17:43:12.421359 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c26f1f3e-6677-4b03-9b9d-3b441c621d77-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:43:12 crc kubenswrapper[4821]: I0930 17:43:12.789123 4821 generic.go:334] "Generic (PLEG): container finished" podID="c26f1f3e-6677-4b03-9b9d-3b441c621d77" containerID="b617ac7671fd46da1a97a5a5ba6811c04654a8f366fa9bb4c74ffc932aa48f93" exitCode=0 Sep 30 17:43:12 crc kubenswrapper[4821]: I0930 17:43:12.789169 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vpp47" Sep 30 17:43:12 crc kubenswrapper[4821]: I0930 17:43:12.789187 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vpp47" event={"ID":"c26f1f3e-6677-4b03-9b9d-3b441c621d77","Type":"ContainerDied","Data":"b617ac7671fd46da1a97a5a5ba6811c04654a8f366fa9bb4c74ffc932aa48f93"} Sep 30 17:43:12 crc kubenswrapper[4821]: I0930 17:43:12.789669 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vpp47" event={"ID":"c26f1f3e-6677-4b03-9b9d-3b441c621d77","Type":"ContainerDied","Data":"4021e50ceea71225bec3b941f176f1a135b8be758c3ba3cfcd4a29d65bff3f4d"} Sep 30 17:43:12 crc kubenswrapper[4821]: I0930 17:43:12.789691 4821 scope.go:117] "RemoveContainer" containerID="b617ac7671fd46da1a97a5a5ba6811c04654a8f366fa9bb4c74ffc932aa48f93" Sep 30 17:43:12 crc kubenswrapper[4821]: I0930 17:43:12.822048 4821 scope.go:117] "RemoveContainer" containerID="a93f69d3cfb9fa7fb1971c3ebe6d53038d81ca2af6faccf197eb03708131f3d8" Sep 30 17:43:12 crc kubenswrapper[4821]: I0930 17:43:12.829771 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vpp47"] Sep 30 17:43:12 crc kubenswrapper[4821]: I0930 17:43:12.846164 4821 scope.go:117] "RemoveContainer" containerID="2a7644972e0a1ae24fca8bed9b16b90e98cd3fcb224e3503130a7719cfc3b9d9" Sep 30 17:43:12 crc kubenswrapper[4821]: I0930 17:43:12.846169 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vpp47"] Sep 30 17:43:12 crc kubenswrapper[4821]: I0930 17:43:12.907327 4821 scope.go:117] "RemoveContainer" containerID="b617ac7671fd46da1a97a5a5ba6811c04654a8f366fa9bb4c74ffc932aa48f93" Sep 30 17:43:12 crc kubenswrapper[4821]: E0930 17:43:12.907811 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b617ac7671fd46da1a97a5a5ba6811c04654a8f366fa9bb4c74ffc932aa48f93\": container with ID starting with b617ac7671fd46da1a97a5a5ba6811c04654a8f366fa9bb4c74ffc932aa48f93 not found: ID does not exist" containerID="b617ac7671fd46da1a97a5a5ba6811c04654a8f366fa9bb4c74ffc932aa48f93" Sep 30 17:43:12 crc kubenswrapper[4821]: I0930 17:43:12.907852 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b617ac7671fd46da1a97a5a5ba6811c04654a8f366fa9bb4c74ffc932aa48f93"} err="failed to get container status \"b617ac7671fd46da1a97a5a5ba6811c04654a8f366fa9bb4c74ffc932aa48f93\": rpc error: code = NotFound desc = could not find container \"b617ac7671fd46da1a97a5a5ba6811c04654a8f366fa9bb4c74ffc932aa48f93\": container with ID starting with b617ac7671fd46da1a97a5a5ba6811c04654a8f366fa9bb4c74ffc932aa48f93 not found: ID does not exist" Sep 30 17:43:12 crc kubenswrapper[4821]: I0930 17:43:12.907877 4821 scope.go:117] "RemoveContainer" containerID="a93f69d3cfb9fa7fb1971c3ebe6d53038d81ca2af6faccf197eb03708131f3d8" Sep 30 17:43:12 crc kubenswrapper[4821]: E0930 17:43:12.908127 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a93f69d3cfb9fa7fb1971c3ebe6d53038d81ca2af6faccf197eb03708131f3d8\": container with ID starting with a93f69d3cfb9fa7fb1971c3ebe6d53038d81ca2af6faccf197eb03708131f3d8 not found: ID does not exist" containerID="a93f69d3cfb9fa7fb1971c3ebe6d53038d81ca2af6faccf197eb03708131f3d8" Sep 30 17:43:12 crc kubenswrapper[4821]: I0930 17:43:12.908151 4821 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a93f69d3cfb9fa7fb1971c3ebe6d53038d81ca2af6faccf197eb03708131f3d8"} err="failed to get container status \"a93f69d3cfb9fa7fb1971c3ebe6d53038d81ca2af6faccf197eb03708131f3d8\": rpc error: code = NotFound desc = could not find container \"a93f69d3cfb9fa7fb1971c3ebe6d53038d81ca2af6faccf197eb03708131f3d8\": container with ID starting with a93f69d3cfb9fa7fb1971c3ebe6d53038d81ca2af6faccf197eb03708131f3d8 not found: ID does not exist" Sep 30 17:43:12 crc kubenswrapper[4821]: I0930 17:43:12.908169 4821 scope.go:117] "RemoveContainer" containerID="2a7644972e0a1ae24fca8bed9b16b90e98cd3fcb224e3503130a7719cfc3b9d9" Sep 30 17:43:12 crc kubenswrapper[4821]: E0930 17:43:12.908910 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a7644972e0a1ae24fca8bed9b16b90e98cd3fcb224e3503130a7719cfc3b9d9\": container with ID starting with 2a7644972e0a1ae24fca8bed9b16b90e98cd3fcb224e3503130a7719cfc3b9d9 not found: ID does not exist" containerID="2a7644972e0a1ae24fca8bed9b16b90e98cd3fcb224e3503130a7719cfc3b9d9" Sep 30 17:43:12 crc kubenswrapper[4821]: I0930 17:43:12.908987 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a7644972e0a1ae24fca8bed9b16b90e98cd3fcb224e3503130a7719cfc3b9d9"} err="failed to get container status \"2a7644972e0a1ae24fca8bed9b16b90e98cd3fcb224e3503130a7719cfc3b9d9\": rpc error: code = NotFound desc = could not find container \"2a7644972e0a1ae24fca8bed9b16b90e98cd3fcb224e3503130a7719cfc3b9d9\": container with ID starting with 2a7644972e0a1ae24fca8bed9b16b90e98cd3fcb224e3503130a7719cfc3b9d9 not found: ID does not exist" Sep 30 17:43:14 crc kubenswrapper[4821]: I0930 17:43:14.718692 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c26f1f3e-6677-4b03-9b9d-3b441c621d77" path="/var/lib/kubelet/pods/c26f1f3e-6677-4b03-9b9d-3b441c621d77/volumes" Sep 30 17:43:17 crc kubenswrapper[4821]: I0930 17:43:17.706982 4821 scope.go:117] "RemoveContainer" containerID="0626195f9dae72276db5956ba650de3351e14934ae2efe7e2e585943db5fdbd3" Sep 30 17:43:17 crc kubenswrapper[4821]: E0930 17:43:17.707527 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:43:30 crc kubenswrapper[4821]: I0930 17:43:30.706769 4821 scope.go:117] "RemoveContainer" containerID="0626195f9dae72276db5956ba650de3351e14934ae2efe7e2e585943db5fdbd3" Sep 30 17:43:30 crc kubenswrapper[4821]: E0930 17:43:30.707656 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:43:43 crc kubenswrapper[4821]: I0930 17:43:43.706927 4821 scope.go:117] "RemoveContainer" containerID="0626195f9dae72276db5956ba650de3351e14934ae2efe7e2e585943db5fdbd3" Sep 30 17:43:43 crc 
kubenswrapper[4821]: E0930 17:43:43.707795 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:43:56 crc kubenswrapper[4821]: I0930 17:43:56.707504 4821 scope.go:117] "RemoveContainer" containerID="0626195f9dae72276db5956ba650de3351e14934ae2efe7e2e585943db5fdbd3" Sep 30 17:43:56 crc kubenswrapper[4821]: E0930 17:43:56.708258 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:44:11 crc kubenswrapper[4821]: I0930 17:44:11.707664 4821 scope.go:117] "RemoveContainer" containerID="0626195f9dae72276db5956ba650de3351e14934ae2efe7e2e585943db5fdbd3" Sep 30 17:44:11 crc kubenswrapper[4821]: E0930 17:44:11.709467 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:44:24 crc kubenswrapper[4821]: I0930 17:44:24.720218 4821 scope.go:117] "RemoveContainer" containerID="0626195f9dae72276db5956ba650de3351e14934ae2efe7e2e585943db5fdbd3" Sep 30 17:44:24 crc kubenswrapper[4821]: E0930 17:44:24.721463 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:44:38 crc kubenswrapper[4821]: I0930 17:44:38.707516 4821 scope.go:117] "RemoveContainer" containerID="0626195f9dae72276db5956ba650de3351e14934ae2efe7e2e585943db5fdbd3" Sep 30 17:44:38 crc kubenswrapper[4821]: E0930 17:44:38.708557 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:44:50 crc kubenswrapper[4821]: I0930 17:44:50.706862 4821 scope.go:117] "RemoveContainer" containerID="0626195f9dae72276db5956ba650de3351e14934ae2efe7e2e585943db5fdbd3" Sep 30 17:44:50 crc kubenswrapper[4821]: E0930 17:44:50.707566 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:45:00 crc kubenswrapper[4821]: I0930 17:45:00.140162 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320905-6mt2h"] Sep 30 17:45:00 crc kubenswrapper[4821]: E0930 17:45:00.141058 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c26f1f3e-6677-4b03-9b9d-3b441c621d77" containerName="registry-server" Sep 30 17:45:00 crc kubenswrapper[4821]: I0930 17:45:00.141073 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="c26f1f3e-6677-4b03-9b9d-3b441c621d77" containerName="registry-server" Sep 30 17:45:00 crc kubenswrapper[4821]: E0930 17:45:00.141117 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c26f1f3e-6677-4b03-9b9d-3b441c621d77" containerName="extract-content" Sep 30 17:45:00 crc kubenswrapper[4821]: I0930 17:45:00.141125 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="c26f1f3e-6677-4b03-9b9d-3b441c621d77" containerName="extract-content" Sep 30 17:45:00 crc kubenswrapper[4821]: E0930 17:45:00.141143 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c26f1f3e-6677-4b03-9b9d-3b441c621d77" containerName="extract-utilities" Sep 30 17:45:00 crc kubenswrapper[4821]: I0930 17:45:00.141150 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="c26f1f3e-6677-4b03-9b9d-3b441c621d77" containerName="extract-utilities" Sep 30 17:45:00 crc kubenswrapper[4821]: I0930 17:45:00.141312 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="c26f1f3e-6677-4b03-9b9d-3b441c621d77" containerName="registry-server" Sep 30 17:45:00 crc kubenswrapper[4821]: I0930 17:45:00.141949 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320905-6mt2h" Sep 30 17:45:00 crc kubenswrapper[4821]: I0930 17:45:00.144186 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Sep 30 17:45:00 crc kubenswrapper[4821]: I0930 17:45:00.144409 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Sep 30 17:45:00 crc kubenswrapper[4821]: I0930 17:45:00.157304 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320905-6mt2h"] Sep 30 17:45:00 crc kubenswrapper[4821]: I0930 17:45:00.240050 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2fld\" (UniqueName: \"kubernetes.io/projected/6c81c7d5-179b-48c0-8d6c-7fd0217085c0-kube-api-access-n2fld\") pod \"collect-profiles-29320905-6mt2h\" (UID: \"6c81c7d5-179b-48c0-8d6c-7fd0217085c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320905-6mt2h" Sep 30 17:45:00 crc kubenswrapper[4821]: I0930 17:45:00.240134 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6c81c7d5-179b-48c0-8d6c-7fd0217085c0-secret-volume\") pod \"collect-profiles-29320905-6mt2h\" (UID: \"6c81c7d5-179b-48c0-8d6c-7fd0217085c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320905-6mt2h" Sep 30 17:45:00 crc kubenswrapper[4821]: I0930 17:45:00.240268 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c81c7d5-179b-48c0-8d6c-7fd0217085c0-config-volume\") pod \"collect-profiles-29320905-6mt2h\" (UID: \"6c81c7d5-179b-48c0-8d6c-7fd0217085c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320905-6mt2h" Sep 30 17:45:00 crc kubenswrapper[4821]: I0930 17:45:00.342325 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c81c7d5-179b-48c0-8d6c-7fd0217085c0-config-volume\") pod \"collect-profiles-29320905-6mt2h\" (UID: \"6c81c7d5-179b-48c0-8d6c-7fd0217085c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320905-6mt2h" Sep 30 17:45:00 crc kubenswrapper[4821]: I0930 17:45:00.342387 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2fld\" (UniqueName: \"kubernetes.io/projected/6c81c7d5-179b-48c0-8d6c-7fd0217085c0-kube-api-access-n2fld\") pod \"collect-profiles-29320905-6mt2h\" (UID: \"6c81c7d5-179b-48c0-8d6c-7fd0217085c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320905-6mt2h" Sep 30 17:45:00 crc kubenswrapper[4821]: I0930 17:45:00.342420 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6c81c7d5-179b-48c0-8d6c-7fd0217085c0-secret-volume\") pod \"collect-profiles-29320905-6mt2h\" (UID: \"6c81c7d5-179b-48c0-8d6c-7fd0217085c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320905-6mt2h" Sep 30 17:45:00 crc kubenswrapper[4821]: I0930 17:45:00.344218 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c81c7d5-179b-48c0-8d6c-7fd0217085c0-config-volume\") pod 
\"collect-profiles-29320905-6mt2h\" (UID: \"6c81c7d5-179b-48c0-8d6c-7fd0217085c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320905-6mt2h" Sep 30 17:45:00 crc kubenswrapper[4821]: I0930 17:45:00.352606 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6c81c7d5-179b-48c0-8d6c-7fd0217085c0-secret-volume\") pod \"collect-profiles-29320905-6mt2h\" (UID: \"6c81c7d5-179b-48c0-8d6c-7fd0217085c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320905-6mt2h" Sep 30 17:45:00 crc kubenswrapper[4821]: I0930 17:45:00.359818 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2fld\" (UniqueName: \"kubernetes.io/projected/6c81c7d5-179b-48c0-8d6c-7fd0217085c0-kube-api-access-n2fld\") pod \"collect-profiles-29320905-6mt2h\" (UID: \"6c81c7d5-179b-48c0-8d6c-7fd0217085c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320905-6mt2h" Sep 30 17:45:00 crc kubenswrapper[4821]: I0930 17:45:00.463133 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320905-6mt2h" Sep 30 17:45:00 crc kubenswrapper[4821]: I0930 17:45:00.971555 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320905-6mt2h"] Sep 30 17:45:01 crc kubenswrapper[4821]: I0930 17:45:01.640482 4821 generic.go:334] "Generic (PLEG): container finished" podID="6c81c7d5-179b-48c0-8d6c-7fd0217085c0" containerID="4f130e822734fa3d0c266302927ca67e5e786cd7101f1ee4e92e3d2fad205859" exitCode=0 Sep 30 17:45:01 crc kubenswrapper[4821]: I0930 17:45:01.640534 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320905-6mt2h" event={"ID":"6c81c7d5-179b-48c0-8d6c-7fd0217085c0","Type":"ContainerDied","Data":"4f130e822734fa3d0c266302927ca67e5e786cd7101f1ee4e92e3d2fad205859"} Sep 30 17:45:01 crc kubenswrapper[4821]: I0930 17:45:01.640565 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320905-6mt2h" event={"ID":"6c81c7d5-179b-48c0-8d6c-7fd0217085c0","Type":"ContainerStarted","Data":"05d435758c7a82c7392724b771252e6196926387b93c8e48f5d210e4c494a850"} Sep 30 17:45:03 crc kubenswrapper[4821]: I0930 17:45:03.013186 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320905-6mt2h" Sep 30 17:45:03 crc kubenswrapper[4821]: I0930 17:45:03.193246 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6c81c7d5-179b-48c0-8d6c-7fd0217085c0-secret-volume\") pod \"6c81c7d5-179b-48c0-8d6c-7fd0217085c0\" (UID: \"6c81c7d5-179b-48c0-8d6c-7fd0217085c0\") " Sep 30 17:45:03 crc kubenswrapper[4821]: I0930 17:45:03.193394 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2fld\" (UniqueName: \"kubernetes.io/projected/6c81c7d5-179b-48c0-8d6c-7fd0217085c0-kube-api-access-n2fld\") pod \"6c81c7d5-179b-48c0-8d6c-7fd0217085c0\" (UID: \"6c81c7d5-179b-48c0-8d6c-7fd0217085c0\") " Sep 30 17:45:03 crc kubenswrapper[4821]: I0930 17:45:03.193427 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c81c7d5-179b-48c0-8d6c-7fd0217085c0-config-volume\") pod \"6c81c7d5-179b-48c0-8d6c-7fd0217085c0\" (UID: \"6c81c7d5-179b-48c0-8d6c-7fd0217085c0\") " Sep 30 17:45:03 crc kubenswrapper[4821]: I0930 17:45:03.194460 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c81c7d5-179b-48c0-8d6c-7fd0217085c0-config-volume" (OuterVolumeSpecName: "config-volume") pod "6c81c7d5-179b-48c0-8d6c-7fd0217085c0" (UID: "6c81c7d5-179b-48c0-8d6c-7fd0217085c0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:45:03 crc kubenswrapper[4821]: I0930 17:45:03.198812 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c81c7d5-179b-48c0-8d6c-7fd0217085c0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6c81c7d5-179b-48c0-8d6c-7fd0217085c0" (UID: "6c81c7d5-179b-48c0-8d6c-7fd0217085c0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:45:03 crc kubenswrapper[4821]: I0930 17:45:03.199230 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c81c7d5-179b-48c0-8d6c-7fd0217085c0-kube-api-access-n2fld" (OuterVolumeSpecName: "kube-api-access-n2fld") pod "6c81c7d5-179b-48c0-8d6c-7fd0217085c0" (UID: "6c81c7d5-179b-48c0-8d6c-7fd0217085c0"). InnerVolumeSpecName "kube-api-access-n2fld". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:45:03 crc kubenswrapper[4821]: I0930 17:45:03.295415 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2fld\" (UniqueName: \"kubernetes.io/projected/6c81c7d5-179b-48c0-8d6c-7fd0217085c0-kube-api-access-n2fld\") on node \"crc\" DevicePath \"\"" Sep 30 17:45:03 crc kubenswrapper[4821]: I0930 17:45:03.295483 4821 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c81c7d5-179b-48c0-8d6c-7fd0217085c0-config-volume\") on node \"crc\" DevicePath \"\"" Sep 30 17:45:03 crc kubenswrapper[4821]: I0930 17:45:03.295493 4821 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6c81c7d5-179b-48c0-8d6c-7fd0217085c0-secret-volume\") on node \"crc\" DevicePath \"\"" Sep 30 17:45:03 crc kubenswrapper[4821]: I0930 17:45:03.657150 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320905-6mt2h" event={"ID":"6c81c7d5-179b-48c0-8d6c-7fd0217085c0","Type":"ContainerDied","Data":"05d435758c7a82c7392724b771252e6196926387b93c8e48f5d210e4c494a850"} Sep 30 17:45:03 crc kubenswrapper[4821]: I0930 17:45:03.657190 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05d435758c7a82c7392724b771252e6196926387b93c8e48f5d210e4c494a850" Sep 30 17:45:03 crc kubenswrapper[4821]: I0930 17:45:03.657189 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320905-6mt2h" Sep 30 17:45:04 crc kubenswrapper[4821]: I0930 17:45:04.076803 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320860-mxq6r"] Sep 30 17:45:04 crc kubenswrapper[4821]: I0930 17:45:04.083542 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320860-mxq6r"] Sep 30 17:45:04 crc kubenswrapper[4821]: I0930 17:45:04.715179 4821 scope.go:117] "RemoveContainer" containerID="0626195f9dae72276db5956ba650de3351e14934ae2efe7e2e585943db5fdbd3" Sep 30 17:45:04 crc kubenswrapper[4821]: E0930 17:45:04.715614 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:45:04 crc kubenswrapper[4821]: I0930 17:45:04.718686 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56fb6d19-7b78-4122-9989-0676a86c33dd" path="/var/lib/kubelet/pods/56fb6d19-7b78-4122-9989-0676a86c33dd/volumes" Sep 30 17:45:17 crc kubenswrapper[4821]: I0930 17:45:17.707459 4821 scope.go:117] "RemoveContainer" containerID="0626195f9dae72276db5956ba650de3351e14934ae2efe7e2e585943db5fdbd3" Sep 30 17:45:17 crc kubenswrapper[4821]: E0930 17:45:17.708135 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:45:29 crc kubenswrapper[4821]: I0930 17:45:29.961563 4821 scope.go:117] "RemoveContainer" containerID="c90ccf86686cd276de0d6201e7afea24a7f6056f7c35f9fab5e19810fa5e7ff4" Sep 30 17:45:32 crc kubenswrapper[4821]: I0930 17:45:32.708345 4821 scope.go:117] "RemoveContainer" containerID="0626195f9dae72276db5956ba650de3351e14934ae2efe7e2e585943db5fdbd3" Sep 30 17:45:33 crc kubenswrapper[4821]: I0930 17:45:33.879363 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" event={"ID":"1c2ce348-eadc-4629-a03f-fb8924b5b434","Type":"ContainerStarted","Data":"5cf50fdd875b9603c0f2935c268e8a354214b5cbc70ddfbf503b7db418b807ce"} Sep 30 17:47:49 crc kubenswrapper[4821]: I0930 17:47:49.349375 4821 patch_prober.go:28] interesting pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:47:49 crc kubenswrapper[4821]: I0930 17:47:49.349829 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:48:19 crc kubenswrapper[4821]: I0930 17:48:19.350369 4821 patch_prober.go:28] interesting pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:48:19 crc kubenswrapper[4821]: I0930 17:48:19.352467 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:48:36 crc kubenswrapper[4821]: I0930 17:48:36.121025 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kt2xs"] Sep 30 17:48:36 crc kubenswrapper[4821]: E0930 17:48:36.122337 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c81c7d5-179b-48c0-8d6c-7fd0217085c0" containerName="collect-profiles" Sep 30 17:48:36 crc kubenswrapper[4821]: I0930 17:48:36.122363 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c81c7d5-179b-48c0-8d6c-7fd0217085c0" containerName="collect-profiles" Sep 30 17:48:36 crc kubenswrapper[4821]: I0930 17:48:36.122661 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c81c7d5-179b-48c0-8d6c-7fd0217085c0" containerName="collect-profiles" Sep 30 17:48:36 crc kubenswrapper[4821]: I0930 17:48:36.125165 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kt2xs" Sep 30 17:48:36 crc kubenswrapper[4821]: I0930 17:48:36.158262 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kt2xs"] Sep 30 17:48:36 crc kubenswrapper[4821]: I0930 17:48:36.175449 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr9fn\" (UniqueName: \"kubernetes.io/projected/7028cf15-8233-4ae1-a274-ebae055e042b-kube-api-access-gr9fn\") pod \"community-operators-kt2xs\" (UID: \"7028cf15-8233-4ae1-a274-ebae055e042b\") " pod="openshift-marketplace/community-operators-kt2xs" Sep 30 17:48:36 crc kubenswrapper[4821]: I0930 17:48:36.175594 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7028cf15-8233-4ae1-a274-ebae055e042b-catalog-content\") pod \"community-operators-kt2xs\" (UID: \"7028cf15-8233-4ae1-a274-ebae055e042b\") " pod="openshift-marketplace/community-operators-kt2xs" Sep 30 17:48:36 crc kubenswrapper[4821]: I0930 17:48:36.175633 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7028cf15-8233-4ae1-a274-ebae055e042b-utilities\") pod \"community-operators-kt2xs\" (UID: \"7028cf15-8233-4ae1-a274-ebae055e042b\") " pod="openshift-marketplace/community-operators-kt2xs" Sep 30 17:48:36 crc kubenswrapper[4821]: I0930 17:48:36.276836 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr9fn\" (UniqueName: \"kubernetes.io/projected/7028cf15-8233-4ae1-a274-ebae055e042b-kube-api-access-gr9fn\") pod \"community-operators-kt2xs\" (UID: \"7028cf15-8233-4ae1-a274-ebae055e042b\") " pod="openshift-marketplace/community-operators-kt2xs" Sep 30 17:48:36 crc kubenswrapper[4821]: I0930 17:48:36.276935 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7028cf15-8233-4ae1-a274-ebae055e042b-catalog-content\") pod \"community-operators-kt2xs\" (UID: \"7028cf15-8233-4ae1-a274-ebae055e042b\") " pod="openshift-marketplace/community-operators-kt2xs" Sep 30 17:48:36 crc kubenswrapper[4821]: I0930 17:48:36.276960 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7028cf15-8233-4ae1-a274-ebae055e042b-utilities\") pod \"community-operators-kt2xs\" (UID: \"7028cf15-8233-4ae1-a274-ebae055e042b\") " pod="openshift-marketplace/community-operators-kt2xs" Sep 30 17:48:36 crc kubenswrapper[4821]: I0930 17:48:36.277517 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7028cf15-8233-4ae1-a274-ebae055e042b-utilities\") pod \"community-operators-kt2xs\" (UID: \"7028cf15-8233-4ae1-a274-ebae055e042b\") " pod="openshift-marketplace/community-operators-kt2xs" Sep 30 17:48:36 crc kubenswrapper[4821]: I0930 17:48:36.277959 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7028cf15-8233-4ae1-a274-ebae055e042b-catalog-content\") pod \"community-operators-kt2xs\" (UID: \"7028cf15-8233-4ae1-a274-ebae055e042b\") " pod="openshift-marketplace/community-operators-kt2xs" Sep 30 17:48:36 crc kubenswrapper[4821]: I0930 17:48:36.305737 4821 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gr9fn\" (UniqueName: \"kubernetes.io/projected/7028cf15-8233-4ae1-a274-ebae055e042b-kube-api-access-gr9fn\") pod \"community-operators-kt2xs\" (UID: \"7028cf15-8233-4ae1-a274-ebae055e042b\") " pod="openshift-marketplace/community-operators-kt2xs" Sep 30 17:48:36 crc kubenswrapper[4821]: I0930 17:48:36.456215 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kt2xs" Sep 30 17:48:37 crc kubenswrapper[4821]: I0930 17:48:37.005984 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kt2xs"] Sep 30 17:48:37 crc kubenswrapper[4821]: I0930 17:48:37.367708 4821 generic.go:334] "Generic (PLEG): container finished" podID="7028cf15-8233-4ae1-a274-ebae055e042b" containerID="f3e6914d8c8cac7a1ae7531927a46eb6fc31925b863028465c3cb434aa0b1036" exitCode=0 Sep 30 17:48:37 crc kubenswrapper[4821]: I0930 17:48:37.367804 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kt2xs" event={"ID":"7028cf15-8233-4ae1-a274-ebae055e042b","Type":"ContainerDied","Data":"f3e6914d8c8cac7a1ae7531927a46eb6fc31925b863028465c3cb434aa0b1036"} Sep 30 17:48:37 crc kubenswrapper[4821]: I0930 17:48:37.369293 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kt2xs" event={"ID":"7028cf15-8233-4ae1-a274-ebae055e042b","Type":"ContainerStarted","Data":"5dbe5a8ebda5bbd9f74acdae00e42b7a2882b7bfe912a98c532192658987af42"} Sep 30 17:48:37 crc kubenswrapper[4821]: I0930 17:48:37.369669 4821 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 30 17:48:38 crc kubenswrapper[4821]: I0930 17:48:38.377359 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kt2xs" event={"ID":"7028cf15-8233-4ae1-a274-ebae055e042b","Type":"ContainerStarted","Data":"05bc602829bd75b5e32f7d7f9848cd6d0ee91483031231cb9161c52309e6f3c8"} Sep 30 17:48:39 crc kubenswrapper[4821]: I0930 17:48:39.388183 4821 generic.go:334] "Generic (PLEG): container finished" podID="7028cf15-8233-4ae1-a274-ebae055e042b" containerID="05bc602829bd75b5e32f7d7f9848cd6d0ee91483031231cb9161c52309e6f3c8" exitCode=0 Sep 30 17:48:39 crc kubenswrapper[4821]: I0930 17:48:39.388295 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kt2xs" event={"ID":"7028cf15-8233-4ae1-a274-ebae055e042b","Type":"ContainerDied","Data":"05bc602829bd75b5e32f7d7f9848cd6d0ee91483031231cb9161c52309e6f3c8"} Sep 30 17:48:40 crc kubenswrapper[4821]: I0930 17:48:40.397533 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kt2xs" event={"ID":"7028cf15-8233-4ae1-a274-ebae055e042b","Type":"ContainerStarted","Data":"eedf2bde0e03ac0a4aa25a3c676f034320953ce3426fade33e0d58cd7c9ad404"} Sep 30 17:48:40 crc kubenswrapper[4821]: I0930 17:48:40.414479 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kt2xs" podStartSLOduration=1.966251308 podStartE2EDuration="4.414464208s" podCreationTimestamp="2025-09-30 17:48:36 +0000 UTC" firstStartedPulling="2025-09-30 17:48:37.369372475 +0000 UTC m=+2713.274418419" lastFinishedPulling="2025-09-30 17:48:39.817585375 +0000 UTC m=+2715.722631319" observedRunningTime="2025-09-30 17:48:40.411991358 +0000 UTC m=+2716.317037312" watchObservedRunningTime="2025-09-30 
17:48:40.414464208 +0000 UTC m=+2716.319510152" Sep 30 17:48:46 crc kubenswrapper[4821]: I0930 17:48:46.456925 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kt2xs" Sep 30 17:48:46 crc kubenswrapper[4821]: I0930 17:48:46.457618 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kt2xs" Sep 30 17:48:46 crc kubenswrapper[4821]: I0930 17:48:46.504504 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kt2xs" Sep 30 17:48:47 crc kubenswrapper[4821]: I0930 17:48:47.486444 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kt2xs" Sep 30 17:48:47 crc kubenswrapper[4821]: I0930 17:48:47.541035 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kt2xs"] Sep 30 17:48:49 crc kubenswrapper[4821]: I0930 17:48:49.349345 4821 patch_prober.go:28] interesting pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:48:49 crc kubenswrapper[4821]: I0930 17:48:49.349691 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:48:49 crc kubenswrapper[4821]: I0930 17:48:49.349736 4821 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" Sep 30 17:48:49 crc kubenswrapper[4821]: I0930 17:48:49.350505 4821 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5cf50fdd875b9603c0f2935c268e8a354214b5cbc70ddfbf503b7db418b807ce"} pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 17:48:49 crc kubenswrapper[4821]: I0930 17:48:49.350591 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" containerID="cri-o://5cf50fdd875b9603c0f2935c268e8a354214b5cbc70ddfbf503b7db418b807ce" gracePeriod=600 Sep 30 17:48:49 crc kubenswrapper[4821]: I0930 17:48:49.462794 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kt2xs" podUID="7028cf15-8233-4ae1-a274-ebae055e042b" containerName="registry-server" containerID="cri-o://eedf2bde0e03ac0a4aa25a3c676f034320953ce3426fade33e0d58cd7c9ad404" gracePeriod=2 Sep 30 17:48:49 crc kubenswrapper[4821]: I0930 17:48:49.907322 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kt2xs" Sep 30 17:48:50 crc kubenswrapper[4821]: I0930 17:48:50.024558 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7028cf15-8233-4ae1-a274-ebae055e042b-catalog-content\") pod \"7028cf15-8233-4ae1-a274-ebae055e042b\" (UID: \"7028cf15-8233-4ae1-a274-ebae055e042b\") " Sep 30 17:48:50 crc kubenswrapper[4821]: I0930 17:48:50.024626 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7028cf15-8233-4ae1-a274-ebae055e042b-utilities\") pod \"7028cf15-8233-4ae1-a274-ebae055e042b\" (UID: \"7028cf15-8233-4ae1-a274-ebae055e042b\") " Sep 30 17:48:50 crc kubenswrapper[4821]: I0930 17:48:50.025640 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7028cf15-8233-4ae1-a274-ebae055e042b-utilities" (OuterVolumeSpecName: "utilities") pod "7028cf15-8233-4ae1-a274-ebae055e042b" (UID: "7028cf15-8233-4ae1-a274-ebae055e042b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:48:50 crc kubenswrapper[4821]: I0930 17:48:50.025736 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr9fn\" (UniqueName: \"kubernetes.io/projected/7028cf15-8233-4ae1-a274-ebae055e042b-kube-api-access-gr9fn\") pod \"7028cf15-8233-4ae1-a274-ebae055e042b\" (UID: \"7028cf15-8233-4ae1-a274-ebae055e042b\") " Sep 30 17:48:50 crc kubenswrapper[4821]: I0930 17:48:50.026369 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7028cf15-8233-4ae1-a274-ebae055e042b-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:48:50 crc kubenswrapper[4821]: I0930 17:48:50.030597 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7028cf15-8233-4ae1-a274-ebae055e042b-kube-api-access-gr9fn" (OuterVolumeSpecName: "kube-api-access-gr9fn") pod "7028cf15-8233-4ae1-a274-ebae055e042b" (UID: "7028cf15-8233-4ae1-a274-ebae055e042b"). InnerVolumeSpecName "kube-api-access-gr9fn". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:48:50 crc kubenswrapper[4821]: I0930 17:48:50.071038 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7028cf15-8233-4ae1-a274-ebae055e042b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7028cf15-8233-4ae1-a274-ebae055e042b" (UID: "7028cf15-8233-4ae1-a274-ebae055e042b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:48:50 crc kubenswrapper[4821]: I0930 17:48:50.128498 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gr9fn\" (UniqueName: \"kubernetes.io/projected/7028cf15-8233-4ae1-a274-ebae055e042b-kube-api-access-gr9fn\") on node \"crc\" DevicePath \"\"" Sep 30 17:48:50 crc kubenswrapper[4821]: I0930 17:48:50.128527 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7028cf15-8233-4ae1-a274-ebae055e042b-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:48:50 crc kubenswrapper[4821]: I0930 17:48:50.478382 4821 generic.go:334] "Generic (PLEG): container finished" podID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerID="5cf50fdd875b9603c0f2935c268e8a354214b5cbc70ddfbf503b7db418b807ce" exitCode=0 Sep 30 17:48:50 crc kubenswrapper[4821]: I0930 17:48:50.478471 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" event={"ID":"1c2ce348-eadc-4629-a03f-fb8924b5b434","Type":"ContainerDied","Data":"5cf50fdd875b9603c0f2935c268e8a354214b5cbc70ddfbf503b7db418b807ce"} Sep 30 17:48:50 crc kubenswrapper[4821]: I0930 17:48:50.479108 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" event={"ID":"1c2ce348-eadc-4629-a03f-fb8924b5b434","Type":"ContainerStarted","Data":"8949956a6c788e489e965cc971d76b0643c24aff4ee5c92bb208d99216988148"} Sep 30 17:48:50 crc kubenswrapper[4821]: I0930 17:48:50.479144 4821 scope.go:117] "RemoveContainer" containerID="0626195f9dae72276db5956ba650de3351e14934ae2efe7e2e585943db5fdbd3" Sep 30 17:48:50 crc kubenswrapper[4821]: I0930 17:48:50.489097 4821 generic.go:334] "Generic (PLEG): container finished" podID="7028cf15-8233-4ae1-a274-ebae055e042b" containerID="eedf2bde0e03ac0a4aa25a3c676f034320953ce3426fade33e0d58cd7c9ad404" exitCode=0 Sep 30 17:48:50 crc kubenswrapper[4821]: I0930 17:48:50.489142 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kt2xs" event={"ID":"7028cf15-8233-4ae1-a274-ebae055e042b","Type":"ContainerDied","Data":"eedf2bde0e03ac0a4aa25a3c676f034320953ce3426fade33e0d58cd7c9ad404"} Sep 30 17:48:50 crc kubenswrapper[4821]: I0930 17:48:50.489172 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kt2xs" event={"ID":"7028cf15-8233-4ae1-a274-ebae055e042b","Type":"ContainerDied","Data":"5dbe5a8ebda5bbd9f74acdae00e42b7a2882b7bfe912a98c532192658987af42"} Sep 30 17:48:50 crc kubenswrapper[4821]: I0930 17:48:50.489223 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kt2xs" Sep 30 17:48:50 crc kubenswrapper[4821]: I0930 17:48:50.538066 4821 scope.go:117] "RemoveContainer" containerID="eedf2bde0e03ac0a4aa25a3c676f034320953ce3426fade33e0d58cd7c9ad404" Sep 30 17:48:50 crc kubenswrapper[4821]: I0930 17:48:50.556293 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kt2xs"] Sep 30 17:48:50 crc kubenswrapper[4821]: I0930 17:48:50.564915 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kt2xs"] Sep 30 17:48:50 crc kubenswrapper[4821]: I0930 17:48:50.565628 4821 scope.go:117] "RemoveContainer" containerID="05bc602829bd75b5e32f7d7f9848cd6d0ee91483031231cb9161c52309e6f3c8" Sep 30 17:48:50 crc kubenswrapper[4821]: I0930 17:48:50.588228 4821 scope.go:117] "RemoveContainer" containerID="f3e6914d8c8cac7a1ae7531927a46eb6fc31925b863028465c3cb434aa0b1036" Sep 30 17:48:50 crc kubenswrapper[4821]: I0930 17:48:50.630500 4821 scope.go:117] "RemoveContainer" containerID="eedf2bde0e03ac0a4aa25a3c676f034320953ce3426fade33e0d58cd7c9ad404" Sep 30 17:48:50 crc kubenswrapper[4821]: E0930 17:48:50.630921 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eedf2bde0e03ac0a4aa25a3c676f034320953ce3426fade33e0d58cd7c9ad404\": container with ID starting with eedf2bde0e03ac0a4aa25a3c676f034320953ce3426fade33e0d58cd7c9ad404 not found: ID does not exist" containerID="eedf2bde0e03ac0a4aa25a3c676f034320953ce3426fade33e0d58cd7c9ad404" Sep 30 17:48:50 crc kubenswrapper[4821]: I0930 17:48:50.630959 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eedf2bde0e03ac0a4aa25a3c676f034320953ce3426fade33e0d58cd7c9ad404"} err="failed to get container status \"eedf2bde0e03ac0a4aa25a3c676f034320953ce3426fade33e0d58cd7c9ad404\": rpc error: code = NotFound desc = could not find container \"eedf2bde0e03ac0a4aa25a3c676f034320953ce3426fade33e0d58cd7c9ad404\": container with ID starting with eedf2bde0e03ac0a4aa25a3c676f034320953ce3426fade33e0d58cd7c9ad404 not found: ID does not exist" Sep 30 17:48:50 crc kubenswrapper[4821]: I0930 17:48:50.630985 4821 scope.go:117] "RemoveContainer" containerID="05bc602829bd75b5e32f7d7f9848cd6d0ee91483031231cb9161c52309e6f3c8" Sep 30 17:48:50 crc kubenswrapper[4821]: E0930 17:48:50.631510 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05bc602829bd75b5e32f7d7f9848cd6d0ee91483031231cb9161c52309e6f3c8\": container with ID starting with 05bc602829bd75b5e32f7d7f9848cd6d0ee91483031231cb9161c52309e6f3c8 not found: ID does not exist" containerID="05bc602829bd75b5e32f7d7f9848cd6d0ee91483031231cb9161c52309e6f3c8" Sep 30 17:48:50 crc kubenswrapper[4821]: I0930 17:48:50.631535 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05bc602829bd75b5e32f7d7f9848cd6d0ee91483031231cb9161c52309e6f3c8"} err="failed to get container status \"05bc602829bd75b5e32f7d7f9848cd6d0ee91483031231cb9161c52309e6f3c8\": rpc error: code = NotFound desc = could not find container \"05bc602829bd75b5e32f7d7f9848cd6d0ee91483031231cb9161c52309e6f3c8\": container with ID starting with 05bc602829bd75b5e32f7d7f9848cd6d0ee91483031231cb9161c52309e6f3c8 not found: ID does not exist" Sep 30 17:48:50 crc kubenswrapper[4821]: I0930 17:48:50.631550 4821 scope.go:117] "RemoveContainer" 
containerID="f3e6914d8c8cac7a1ae7531927a46eb6fc31925b863028465c3cb434aa0b1036" Sep 30 17:48:50 crc kubenswrapper[4821]: E0930 17:48:50.632202 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3e6914d8c8cac7a1ae7531927a46eb6fc31925b863028465c3cb434aa0b1036\": container with ID starting with f3e6914d8c8cac7a1ae7531927a46eb6fc31925b863028465c3cb434aa0b1036 not found: ID does not exist" containerID="f3e6914d8c8cac7a1ae7531927a46eb6fc31925b863028465c3cb434aa0b1036" Sep 30 17:48:50 crc kubenswrapper[4821]: I0930 17:48:50.632243 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3e6914d8c8cac7a1ae7531927a46eb6fc31925b863028465c3cb434aa0b1036"} err="failed to get container status \"f3e6914d8c8cac7a1ae7531927a46eb6fc31925b863028465c3cb434aa0b1036\": rpc error: code = NotFound desc = could not find container \"f3e6914d8c8cac7a1ae7531927a46eb6fc31925b863028465c3cb434aa0b1036\": container with ID starting with f3e6914d8c8cac7a1ae7531927a46eb6fc31925b863028465c3cb434aa0b1036 not found: ID does not exist" Sep 30 17:48:50 crc kubenswrapper[4821]: I0930 17:48:50.718533 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7028cf15-8233-4ae1-a274-ebae055e042b" path="/var/lib/kubelet/pods/7028cf15-8233-4ae1-a274-ebae055e042b/volumes" Sep 30 17:49:14 crc kubenswrapper[4821]: I0930 17:49:14.218585 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wrlsz/must-gather-84smp"] Sep 30 17:49:14 crc kubenswrapper[4821]: E0930 17:49:14.219469 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7028cf15-8233-4ae1-a274-ebae055e042b" containerName="extract-content" Sep 30 17:49:14 crc kubenswrapper[4821]: I0930 17:49:14.219488 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="7028cf15-8233-4ae1-a274-ebae055e042b" containerName="extract-content" Sep 30 17:49:14 crc kubenswrapper[4821]: E0930 17:49:14.219516 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7028cf15-8233-4ae1-a274-ebae055e042b" containerName="registry-server" Sep 30 17:49:14 crc kubenswrapper[4821]: I0930 17:49:14.219524 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="7028cf15-8233-4ae1-a274-ebae055e042b" containerName="registry-server" Sep 30 17:49:14 crc kubenswrapper[4821]: E0930 17:49:14.219548 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7028cf15-8233-4ae1-a274-ebae055e042b" containerName="extract-utilities" Sep 30 17:49:14 crc kubenswrapper[4821]: I0930 17:49:14.219556 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="7028cf15-8233-4ae1-a274-ebae055e042b" containerName="extract-utilities" Sep 30 17:49:14 crc kubenswrapper[4821]: I0930 17:49:14.219764 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="7028cf15-8233-4ae1-a274-ebae055e042b" containerName="registry-server" Sep 30 17:49:14 crc kubenswrapper[4821]: I0930 17:49:14.220649 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wrlsz/must-gather-84smp" Sep 30 17:49:14 crc kubenswrapper[4821]: I0930 17:49:14.233741 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-wrlsz"/"openshift-service-ca.crt" Sep 30 17:49:14 crc kubenswrapper[4821]: I0930 17:49:14.239926 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-wrlsz"/"kube-root-ca.crt" Sep 30 17:49:14 crc kubenswrapper[4821]: I0930 17:49:14.310384 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsqw7\" (UniqueName: \"kubernetes.io/projected/ac374dbb-98a9-423f-8b7d-399e602c571a-kube-api-access-qsqw7\") pod \"must-gather-84smp\" (UID: \"ac374dbb-98a9-423f-8b7d-399e602c571a\") " pod="openshift-must-gather-wrlsz/must-gather-84smp" Sep 30 17:49:14 crc kubenswrapper[4821]: I0930 17:49:14.310478 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ac374dbb-98a9-423f-8b7d-399e602c571a-must-gather-output\") pod \"must-gather-84smp\" (UID: \"ac374dbb-98a9-423f-8b7d-399e602c571a\") " pod="openshift-must-gather-wrlsz/must-gather-84smp" Sep 30 17:49:14 crc kubenswrapper[4821]: I0930 17:49:14.322867 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wrlsz/must-gather-84smp"] Sep 30 17:49:14 crc kubenswrapper[4821]: I0930 17:49:14.412415 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsqw7\" (UniqueName: \"kubernetes.io/projected/ac374dbb-98a9-423f-8b7d-399e602c571a-kube-api-access-qsqw7\") pod \"must-gather-84smp\" (UID: \"ac374dbb-98a9-423f-8b7d-399e602c571a\") " pod="openshift-must-gather-wrlsz/must-gather-84smp" Sep 30 17:49:14 crc kubenswrapper[4821]: I0930 17:49:14.412800 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ac374dbb-98a9-423f-8b7d-399e602c571a-must-gather-output\") pod \"must-gather-84smp\" (UID: \"ac374dbb-98a9-423f-8b7d-399e602c571a\") " pod="openshift-must-gather-wrlsz/must-gather-84smp" Sep 30 17:49:14 crc kubenswrapper[4821]: I0930 17:49:14.413300 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ac374dbb-98a9-423f-8b7d-399e602c571a-must-gather-output\") pod \"must-gather-84smp\" (UID: \"ac374dbb-98a9-423f-8b7d-399e602c571a\") " pod="openshift-must-gather-wrlsz/must-gather-84smp" Sep 30 17:49:14 crc kubenswrapper[4821]: I0930 17:49:14.433656 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsqw7\" (UniqueName: \"kubernetes.io/projected/ac374dbb-98a9-423f-8b7d-399e602c571a-kube-api-access-qsqw7\") pod \"must-gather-84smp\" (UID: \"ac374dbb-98a9-423f-8b7d-399e602c571a\") " pod="openshift-must-gather-wrlsz/must-gather-84smp" Sep 30 17:49:14 crc kubenswrapper[4821]: I0930 17:49:14.543908 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wrlsz/must-gather-84smp" Sep 30 17:49:14 crc kubenswrapper[4821]: I0930 17:49:14.997535 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wrlsz/must-gather-84smp"] Sep 30 17:49:15 crc kubenswrapper[4821]: I0930 17:49:15.705791 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wrlsz/must-gather-84smp" event={"ID":"ac374dbb-98a9-423f-8b7d-399e602c571a","Type":"ContainerStarted","Data":"f69810bbd83feb8f3b2c6e3eacb37a8a88d800c348b589651f9955f7ebb58fcf"} Sep 30 17:49:20 crc kubenswrapper[4821]: I0930 17:49:20.748574 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wrlsz/must-gather-84smp" event={"ID":"ac374dbb-98a9-423f-8b7d-399e602c571a","Type":"ContainerStarted","Data":"8715d8b232a1f3fb65228a662d22c405a699d6a8f7190aff4184944c9a6372c2"} Sep 30 17:49:20 crc kubenswrapper[4821]: I0930 17:49:20.750230 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wrlsz/must-gather-84smp" event={"ID":"ac374dbb-98a9-423f-8b7d-399e602c571a","Type":"ContainerStarted","Data":"66df0333382032aadba8c45ad8c392b635acec370f762992d147a6283b88a1e9"} Sep 30 17:49:20 crc kubenswrapper[4821]: I0930 17:49:20.763516 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wrlsz/must-gather-84smp" podStartSLOduration=1.786526819 podStartE2EDuration="6.763500238s" podCreationTimestamp="2025-09-30 17:49:14 +0000 UTC" firstStartedPulling="2025-09-30 17:49:15.0051764 +0000 UTC m=+2750.910222344" lastFinishedPulling="2025-09-30 17:49:19.982149819 +0000 UTC m=+2755.887195763" observedRunningTime="2025-09-30 17:49:20.760289618 +0000 UTC m=+2756.665335562" watchObservedRunningTime="2025-09-30 17:49:20.763500238 +0000 UTC m=+2756.668546182" Sep 30 17:49:25 crc kubenswrapper[4821]: I0930 17:49:25.043479 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wrlsz/crc-debug-9n4xj"] Sep 30 17:49:25 crc kubenswrapper[4821]: I0930 17:49:25.045443 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wrlsz/crc-debug-9n4xj" Sep 30 17:49:25 crc kubenswrapper[4821]: I0930 17:49:25.047420 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-wrlsz"/"default-dockercfg-pd2rn" Sep 30 17:49:25 crc kubenswrapper[4821]: I0930 17:49:25.209145 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e5661f8a-adc3-43a9-ba1f-71b866f48d07-host\") pod \"crc-debug-9n4xj\" (UID: \"e5661f8a-adc3-43a9-ba1f-71b866f48d07\") " pod="openshift-must-gather-wrlsz/crc-debug-9n4xj" Sep 30 17:49:25 crc kubenswrapper[4821]: I0930 17:49:25.209262 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwtm7\" (UniqueName: \"kubernetes.io/projected/e5661f8a-adc3-43a9-ba1f-71b866f48d07-kube-api-access-cwtm7\") pod \"crc-debug-9n4xj\" (UID: \"e5661f8a-adc3-43a9-ba1f-71b866f48d07\") " pod="openshift-must-gather-wrlsz/crc-debug-9n4xj" Sep 30 17:49:25 crc kubenswrapper[4821]: I0930 17:49:25.310584 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e5661f8a-adc3-43a9-ba1f-71b866f48d07-host\") pod \"crc-debug-9n4xj\" (UID: \"e5661f8a-adc3-43a9-ba1f-71b866f48d07\") " pod="openshift-must-gather-wrlsz/crc-debug-9n4xj" Sep 30 17:49:25 crc kubenswrapper[4821]: I0930 17:49:25.310665 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwtm7\" (UniqueName: \"kubernetes.io/projected/e5661f8a-adc3-43a9-ba1f-71b866f48d07-kube-api-access-cwtm7\") pod \"crc-debug-9n4xj\" (UID: \"e5661f8a-adc3-43a9-ba1f-71b866f48d07\") " pod="openshift-must-gather-wrlsz/crc-debug-9n4xj" Sep 30 17:49:25 crc kubenswrapper[4821]: I0930 17:49:25.311189 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e5661f8a-adc3-43a9-ba1f-71b866f48d07-host\") pod \"crc-debug-9n4xj\" (UID: \"e5661f8a-adc3-43a9-ba1f-71b866f48d07\") " pod="openshift-must-gather-wrlsz/crc-debug-9n4xj" Sep 30 17:49:25 crc kubenswrapper[4821]: I0930 17:49:25.357341 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwtm7\" (UniqueName: \"kubernetes.io/projected/e5661f8a-adc3-43a9-ba1f-71b866f48d07-kube-api-access-cwtm7\") pod \"crc-debug-9n4xj\" (UID: \"e5661f8a-adc3-43a9-ba1f-71b866f48d07\") " pod="openshift-must-gather-wrlsz/crc-debug-9n4xj" Sep 30 17:49:25 crc kubenswrapper[4821]: I0930 17:49:25.365267 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wrlsz/crc-debug-9n4xj" Sep 30 17:49:25 crc kubenswrapper[4821]: I0930 17:49:25.780841 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wrlsz/crc-debug-9n4xj" event={"ID":"e5661f8a-adc3-43a9-ba1f-71b866f48d07","Type":"ContainerStarted","Data":"db207f2d1332dc57ffc67d50851d48c589666bd29bb7bc14fdf2c99df2bd1258"} Sep 30 17:49:38 crc kubenswrapper[4821]: I0930 17:49:38.899026 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wrlsz/crc-debug-9n4xj" event={"ID":"e5661f8a-adc3-43a9-ba1f-71b866f48d07","Type":"ContainerStarted","Data":"d833cbf49d7ae458ae0708213385fd8ddde07edcdcfcd03b184a28678ac07685"} Sep 30 17:49:38 crc kubenswrapper[4821]: I0930 17:49:38.916805 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wrlsz/crc-debug-9n4xj" podStartSLOduration=1.032791268 podStartE2EDuration="13.916789997s" podCreationTimestamp="2025-09-30 17:49:25 +0000 UTC" firstStartedPulling="2025-09-30 17:49:25.409294463 +0000 UTC m=+2761.314340407" lastFinishedPulling="2025-09-30 17:49:38.293293192 +0000 UTC m=+2774.198339136" observedRunningTime="2025-09-30 17:49:38.914475899 +0000 UTC m=+2774.819521843" watchObservedRunningTime="2025-09-30 17:49:38.916789997 +0000 UTC m=+2774.821835941" Sep 30 17:50:23 crc kubenswrapper[4821]: I0930 17:50:23.238558 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3943f2e8-623c-4f63-b427-a9190a41608f/cinder-api/0.log" Sep 30 17:50:23 crc kubenswrapper[4821]: I0930 17:50:23.246229 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3943f2e8-623c-4f63-b427-a9190a41608f/cinder-api-log/0.log" Sep 30 17:50:23 crc kubenswrapper[4821]: I0930 17:50:23.479634 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_930ee415-1c81-4995-86aa-9ba2f22e81f0/cinder-scheduler/0.log" Sep 30 17:50:23 crc kubenswrapper[4821]: I0930 17:50:23.498768 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_930ee415-1c81-4995-86aa-9ba2f22e81f0/probe/0.log" Sep 30 17:50:23 crc kubenswrapper[4821]: I0930 17:50:23.741731 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-778d8bb9d7-5pq8p_fa3aca63-7477-4ea3-87f8-1d5ed010443c/init/0.log" Sep 30 17:50:23 crc kubenswrapper[4821]: I0930 17:50:23.978518 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-778d8bb9d7-5pq8p_fa3aca63-7477-4ea3-87f8-1d5ed010443c/init/0.log" Sep 30 17:50:24 crc kubenswrapper[4821]: I0930 17:50:24.045036 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-778d8bb9d7-5pq8p_fa3aca63-7477-4ea3-87f8-1d5ed010443c/dnsmasq-dns/0.log" Sep 30 17:50:24 crc kubenswrapper[4821]: I0930 17:50:24.183127 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_96ca9317-b33b-423b-9d59-3c9e9719c941/glance-log/0.log" Sep 30 17:50:24 crc kubenswrapper[4821]: I0930 17:50:24.221546 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_96ca9317-b33b-423b-9d59-3c9e9719c941/glance-httpd/0.log" Sep 30 17:50:24 crc kubenswrapper[4821]: I0930 17:50:24.390643 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_54601c55-069d-4baa-891a-e359cf501642/glance-httpd/0.log" Sep 30 17:50:24 crc kubenswrapper[4821]: I0930 
17:50:24.432532 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_54601c55-069d-4baa-891a-e359cf501642/glance-log/0.log"
Sep 30 17:50:24 crc kubenswrapper[4821]: I0930 17:50:24.697820 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-78b9594fb8-nw9qj_4be55b7f-8f57-44f9-899b-d8e6676e5e02/horizon/0.log"
Sep 30 17:50:24 crc kubenswrapper[4821]: I0930 17:50:24.918965 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-78b9594fb8-nw9qj_4be55b7f-8f57-44f9-899b-d8e6676e5e02/horizon-log/0.log"
Sep 30 17:50:25 crc kubenswrapper[4821]: I0930 17:50:25.052254 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-dcc94dcb7-4dt65_2463ed19-463b-4138-ba45-0890d3173e94/keystone-api/0.log"
Sep 30 17:50:25 crc kubenswrapper[4821]: I0930 17:50:25.468628 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5785886597-f9l4l_f955b5ec-85cf-43fc-9a7c-8f20a510b015/neutron-api/0.log"
Sep 30 17:50:25 crc kubenswrapper[4821]: I0930 17:50:25.641448 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5785886597-f9l4l_f955b5ec-85cf-43fc-9a7c-8f20a510b015/neutron-httpd/0.log"
Sep 30 17:50:26 crc kubenswrapper[4821]: I0930 17:50:26.131502 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_8e17523e-cf69-41f9-bc54-c6a8a9dcba94/nova-api-api/0.log"
Sep 30 17:50:26 crc kubenswrapper[4821]: I0930 17:50:26.301569 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_8e17523e-cf69-41f9-bc54-c6a8a9dcba94/nova-api-log/0.log"
Sep 30 17:50:26 crc kubenswrapper[4821]: I0930 17:50:26.678787 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_54929d4b-fffc-44ff-b7fd-046e8e86334f/nova-cell0-conductor-conductor/0.log"
Sep 30 17:50:27 crc kubenswrapper[4821]: I0930 17:50:27.028541 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_1d8fbb09-93be-43d9-82dd-2de6db113d0c/nova-cell1-conductor-conductor/0.log"
Sep 30 17:50:27 crc kubenswrapper[4821]: I0930 17:50:27.149536 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_f7619b96-2d9a-4684-b08e-8e784c41e984/nova-cell1-novncproxy-novncproxy/0.log"
Sep 30 17:50:27 crc kubenswrapper[4821]: I0930 17:50:27.341326 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_cfc52755-8784-4644-beeb-f91f1ced1245/nova-metadata-log/0.log"
Sep 30 17:50:27 crc kubenswrapper[4821]: I0930 17:50:27.823113 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_797f4c29-1e6b-4ecb-a85a-11e859b5b619/nova-scheduler-scheduler/0.log"
Sep 30 17:50:28 crc kubenswrapper[4821]: I0930 17:50:28.050073 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_093b61b0-fba5-4f4e-8913-0c3700840535/mysql-bootstrap/0.log"
Sep 30 17:50:28 crc kubenswrapper[4821]: I0930 17:50:28.375221 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_093b61b0-fba5-4f4e-8913-0c3700840535/mysql-bootstrap/0.log"
Sep 30 17:50:28 crc kubenswrapper[4821]: I0930 17:50:28.385986 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_093b61b0-fba5-4f4e-8913-0c3700840535/galera/0.log"
Sep 30 17:50:28 crc kubenswrapper[4821]: I0930 17:50:28.434263 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_cfc52755-8784-4644-beeb-f91f1ced1245/nova-metadata-metadata/0.log"
Sep 30 17:50:28 crc kubenswrapper[4821]: I0930 17:50:28.791316 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_4fe717f3-bd67-4fbe-9f81-ba924767f2aa/mysql-bootstrap/0.log"
Sep 30 17:50:29 crc kubenswrapper[4821]: I0930 17:50:29.038842 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_4fe717f3-bd67-4fbe-9f81-ba924767f2aa/galera/0.log"
Sep 30 17:50:29 crc kubenswrapper[4821]: I0930 17:50:29.042047 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_4fe717f3-bd67-4fbe-9f81-ba924767f2aa/mysql-bootstrap/0.log"
Sep 30 17:50:29 crc kubenswrapper[4821]: I0930 17:50:29.383939 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_8e0d1319-9b26-4169-8ccd-82687b2d7986/openstackclient/0.log"
Sep 30 17:50:29 crc kubenswrapper[4821]: I0930 17:50:29.400837 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-4bf4d_bb53300f-a5be-4cf1-a5db-7847ae0d7e12/ovn-controller/0.log"
Sep 30 17:50:29 crc kubenswrapper[4821]: I0930 17:50:29.665337 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_96a178d8-73b6-4ba2-9976-b0544df00047/memcached/0.log"
Sep 30 17:50:29 crc kubenswrapper[4821]: I0930 17:50:29.672489 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-mlhfm_90a47708-f26b-40cc-b6c7-a436b54470e1/openstack-network-exporter/0.log"
Sep 30 17:50:30 crc kubenswrapper[4821]: I0930 17:50:30.120190 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-72fq8_8e8e6a32-bf76-4c50-b5ae-15fb08fb9028/ovsdb-server-init/0.log"
Sep 30 17:50:30 crc kubenswrapper[4821]: I0930 17:50:30.314717 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-72fq8_8e8e6a32-bf76-4c50-b5ae-15fb08fb9028/ovsdb-server-init/0.log"
Sep 30 17:50:30 crc kubenswrapper[4821]: I0930 17:50:30.355709 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-72fq8_8e8e6a32-bf76-4c50-b5ae-15fb08fb9028/ovsdb-server/0.log"
Sep 30 17:50:30 crc kubenswrapper[4821]: I0930 17:50:30.380324 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-72fq8_8e8e6a32-bf76-4c50-b5ae-15fb08fb9028/ovs-vswitchd/0.log"
Sep 30 17:50:30 crc kubenswrapper[4821]: I0930 17:50:30.511993 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_24bfc749-e770-4b60-95b1-869a694a9d70/openstack-network-exporter/0.log"
Sep 30 17:50:30 crc kubenswrapper[4821]: I0930 17:50:30.619880 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_24bfc749-e770-4b60-95b1-869a694a9d70/ovn-northd/0.log"
Sep 30 17:50:30 crc kubenswrapper[4821]: I0930 17:50:30.654217 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7d955e25-1ea9-49f6-b98e-b431a8e82fa8/openstack-network-exporter/0.log"
Sep 30 17:50:30 crc kubenswrapper[4821]: I0930 17:50:30.741060 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7d955e25-1ea9-49f6-b98e-b431a8e82fa8/ovsdbserver-nb/0.log"
Sep 30 17:50:30 crc kubenswrapper[4821]: I0930 17:50:30.862840 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e671ec19-1ea3-4632-93fb-c2e1616b1e33/openstack-network-exporter/0.log"
Sep 30 17:50:30 crc kubenswrapper[4821]: I0930 17:50:30.917351 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e671ec19-1ea3-4632-93fb-c2e1616b1e33/ovsdbserver-sb/0.log"
Sep 30 17:50:31 crc kubenswrapper[4821]: I0930 17:50:31.138798 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5d47786ff6-8tnsh_dd99f742-0ed7-42e3-92f6-5af6acdf92d9/placement-api/0.log"
Sep 30 17:50:31 crc kubenswrapper[4821]: I0930 17:50:31.204193 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5d47786ff6-8tnsh_dd99f742-0ed7-42e3-92f6-5af6acdf92d9/placement-log/0.log"
Sep 30 17:50:31 crc kubenswrapper[4821]: I0930 17:50:31.245717 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9564b951-f1dc-471d-b442-9fc27616e8b6/setup-container/0.log"
Sep 30 17:50:31 crc kubenswrapper[4821]: I0930 17:50:31.561352 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9564b951-f1dc-471d-b442-9fc27616e8b6/rabbitmq/0.log"
Sep 30 17:50:31 crc kubenswrapper[4821]: I0930 17:50:31.568806 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9564b951-f1dc-471d-b442-9fc27616e8b6/setup-container/0.log"
Sep 30 17:50:31 crc kubenswrapper[4821]: I0930 17:50:31.647488 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3c950f02-8f72-4d89-af10-660187db2344/setup-container/0.log"
Sep 30 17:50:31 crc kubenswrapper[4821]: I0930 17:50:31.821671 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3c950f02-8f72-4d89-af10-660187db2344/setup-container/0.log"
Sep 30 17:50:31 crc kubenswrapper[4821]: I0930 17:50:31.857877 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3c950f02-8f72-4d89-af10-660187db2344/rabbitmq/0.log"
Sep 30 17:50:49 crc kubenswrapper[4821]: I0930 17:50:49.349792 4821 patch_prober.go:28] interesting pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 30 17:50:49 crc kubenswrapper[4821]: I0930 17:50:49.350392 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 30 17:51:04 crc kubenswrapper[4821]: I0930 17:51:04.543984 4821 generic.go:334] "Generic (PLEG): container finished" podID="e5661f8a-adc3-43a9-ba1f-71b866f48d07" containerID="d833cbf49d7ae458ae0708213385fd8ddde07edcdcfcd03b184a28678ac07685" exitCode=0
Sep 30 17:51:04 crc kubenswrapper[4821]: I0930 17:51:04.544151 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wrlsz/crc-debug-9n4xj" event={"ID":"e5661f8a-adc3-43a9-ba1f-71b866f48d07","Type":"ContainerDied","Data":"d833cbf49d7ae458ae0708213385fd8ddde07edcdcfcd03b184a28678ac07685"}
Sep 30 17:51:05 crc kubenswrapper[4821]: I0930 17:51:05.647603 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wrlsz/crc-debug-9n4xj"
Sep 30 17:51:05 crc kubenswrapper[4821]: I0930 17:51:05.687309 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wrlsz/crc-debug-9n4xj"]
Sep 30 17:51:05 crc kubenswrapper[4821]: I0930 17:51:05.696456 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wrlsz/crc-debug-9n4xj"]
Sep 30 17:51:05 crc kubenswrapper[4821]: I0930 17:51:05.770270 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwtm7\" (UniqueName: \"kubernetes.io/projected/e5661f8a-adc3-43a9-ba1f-71b866f48d07-kube-api-access-cwtm7\") pod \"e5661f8a-adc3-43a9-ba1f-71b866f48d07\" (UID: \"e5661f8a-adc3-43a9-ba1f-71b866f48d07\") "
Sep 30 17:51:05 crc kubenswrapper[4821]: I0930 17:51:05.770490 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e5661f8a-adc3-43a9-ba1f-71b866f48d07-host\") pod \"e5661f8a-adc3-43a9-ba1f-71b866f48d07\" (UID: \"e5661f8a-adc3-43a9-ba1f-71b866f48d07\") "
Sep 30 17:51:05 crc kubenswrapper[4821]: I0930 17:51:05.770562 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5661f8a-adc3-43a9-ba1f-71b866f48d07-host" (OuterVolumeSpecName: "host") pod "e5661f8a-adc3-43a9-ba1f-71b866f48d07" (UID: "e5661f8a-adc3-43a9-ba1f-71b866f48d07"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 30 17:51:05 crc kubenswrapper[4821]: I0930 17:51:05.771264 4821 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e5661f8a-adc3-43a9-ba1f-71b866f48d07-host\") on node \"crc\" DevicePath \"\""
Sep 30 17:51:05 crc kubenswrapper[4821]: I0930 17:51:05.783258 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5661f8a-adc3-43a9-ba1f-71b866f48d07-kube-api-access-cwtm7" (OuterVolumeSpecName: "kube-api-access-cwtm7") pod "e5661f8a-adc3-43a9-ba1f-71b866f48d07" (UID: "e5661f8a-adc3-43a9-ba1f-71b866f48d07"). InnerVolumeSpecName "kube-api-access-cwtm7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:51:05 crc kubenswrapper[4821]: I0930 17:51:05.874531 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwtm7\" (UniqueName: \"kubernetes.io/projected/e5661f8a-adc3-43a9-ba1f-71b866f48d07-kube-api-access-cwtm7\") on node \"crc\" DevicePath \"\""
Sep 30 17:51:06 crc kubenswrapper[4821]: I0930 17:51:06.561656 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db207f2d1332dc57ffc67d50851d48c589666bd29bb7bc14fdf2c99df2bd1258"
Sep 30 17:51:06 crc kubenswrapper[4821]: I0930 17:51:06.561705 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wrlsz/crc-debug-9n4xj"
Sep 30 17:51:06 crc kubenswrapper[4821]: I0930 17:51:06.716743 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5661f8a-adc3-43a9-ba1f-71b866f48d07" path="/var/lib/kubelet/pods/e5661f8a-adc3-43a9-ba1f-71b866f48d07/volumes"
Sep 30 17:51:06 crc kubenswrapper[4821]: I0930 17:51:06.836121 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wrlsz/crc-debug-xhl7z"]
Sep 30 17:51:06 crc kubenswrapper[4821]: E0930 17:51:06.836483 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5661f8a-adc3-43a9-ba1f-71b866f48d07" containerName="container-00"
Sep 30 17:51:06 crc kubenswrapper[4821]: I0930 17:51:06.836498 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5661f8a-adc3-43a9-ba1f-71b866f48d07" containerName="container-00"
Sep 30 17:51:06 crc kubenswrapper[4821]: I0930 17:51:06.836661 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5661f8a-adc3-43a9-ba1f-71b866f48d07" containerName="container-00"
Sep 30 17:51:06 crc kubenswrapper[4821]: I0930 17:51:06.837241 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wrlsz/crc-debug-xhl7z"
Sep 30 17:51:06 crc kubenswrapper[4821]: I0930 17:51:06.840461 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-wrlsz"/"default-dockercfg-pd2rn"
Sep 30 17:51:06 crc kubenswrapper[4821]: I0930 17:51:06.993525 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0bb7b945-029d-4b37-add2-97e5067e3be2-host\") pod \"crc-debug-xhl7z\" (UID: \"0bb7b945-029d-4b37-add2-97e5067e3be2\") " pod="openshift-must-gather-wrlsz/crc-debug-xhl7z"
Sep 30 17:51:06 crc kubenswrapper[4821]: I0930 17:51:06.993727 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsqcm\" (UniqueName: \"kubernetes.io/projected/0bb7b945-029d-4b37-add2-97e5067e3be2-kube-api-access-lsqcm\") pod \"crc-debug-xhl7z\" (UID: \"0bb7b945-029d-4b37-add2-97e5067e3be2\") " pod="openshift-must-gather-wrlsz/crc-debug-xhl7z"
Sep 30 17:51:07 crc kubenswrapper[4821]: I0930 17:51:07.095064 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsqcm\" (UniqueName: \"kubernetes.io/projected/0bb7b945-029d-4b37-add2-97e5067e3be2-kube-api-access-lsqcm\") pod \"crc-debug-xhl7z\" (UID: \"0bb7b945-029d-4b37-add2-97e5067e3be2\") " pod="openshift-must-gather-wrlsz/crc-debug-xhl7z"
Sep 30 17:51:07 crc kubenswrapper[4821]: I0930 17:51:07.095166 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0bb7b945-029d-4b37-add2-97e5067e3be2-host\") pod \"crc-debug-xhl7z\" (UID: \"0bb7b945-029d-4b37-add2-97e5067e3be2\") " pod="openshift-must-gather-wrlsz/crc-debug-xhl7z"
Sep 30 17:51:07 crc kubenswrapper[4821]: I0930 17:51:07.095323 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0bb7b945-029d-4b37-add2-97e5067e3be2-host\") pod \"crc-debug-xhl7z\" (UID: \"0bb7b945-029d-4b37-add2-97e5067e3be2\") " pod="openshift-must-gather-wrlsz/crc-debug-xhl7z"
Sep 30 17:51:07 crc kubenswrapper[4821]: I0930 17:51:07.122375 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsqcm\" (UniqueName: \"kubernetes.io/projected/0bb7b945-029d-4b37-add2-97e5067e3be2-kube-api-access-lsqcm\") pod \"crc-debug-xhl7z\" (UID: \"0bb7b945-029d-4b37-add2-97e5067e3be2\") " pod="openshift-must-gather-wrlsz/crc-debug-xhl7z"
Sep 30 17:51:07 crc kubenswrapper[4821]: I0930 17:51:07.156698 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wrlsz/crc-debug-xhl7z"
Sep 30 17:51:07 crc kubenswrapper[4821]: W0930 17:51:07.180336 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bb7b945_029d_4b37_add2_97e5067e3be2.slice/crio-d7a570dd818c29955de0ba0f75beae85334fefa4fca9885eeb1c1c77caa6df67 WatchSource:0}: Error finding container d7a570dd818c29955de0ba0f75beae85334fefa4fca9885eeb1c1c77caa6df67: Status 404 returned error can't find the container with id d7a570dd818c29955de0ba0f75beae85334fefa4fca9885eeb1c1c77caa6df67
Sep 30 17:51:07 crc kubenswrapper[4821]: I0930 17:51:07.570232 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wrlsz/crc-debug-xhl7z" event={"ID":"0bb7b945-029d-4b37-add2-97e5067e3be2","Type":"ContainerStarted","Data":"09f5693293312257fff4cd1d3c0ba788f55bfb19aad31e803586f6dbfec8571e"}
Sep 30 17:51:07 crc kubenswrapper[4821]: I0930 17:51:07.570562 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wrlsz/crc-debug-xhl7z" event={"ID":"0bb7b945-029d-4b37-add2-97e5067e3be2","Type":"ContainerStarted","Data":"d7a570dd818c29955de0ba0f75beae85334fefa4fca9885eeb1c1c77caa6df67"}
Sep 30 17:51:07 crc kubenswrapper[4821]: I0930 17:51:07.583248 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wrlsz/crc-debug-xhl7z" podStartSLOduration=1.583225804 podStartE2EDuration="1.583225804s" podCreationTimestamp="2025-09-30 17:51:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:51:07.580587758 +0000 UTC m=+2863.485633712" watchObservedRunningTime="2025-09-30 17:51:07.583225804 +0000 UTC m=+2863.488271738"
Sep 30 17:51:08 crc kubenswrapper[4821]: I0930 17:51:08.594119 4821 generic.go:334] "Generic (PLEG): container finished" podID="0bb7b945-029d-4b37-add2-97e5067e3be2" containerID="09f5693293312257fff4cd1d3c0ba788f55bfb19aad31e803586f6dbfec8571e" exitCode=0
Sep 30 17:51:08 crc kubenswrapper[4821]: I0930 17:51:08.594418 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wrlsz/crc-debug-xhl7z" event={"ID":"0bb7b945-029d-4b37-add2-97e5067e3be2","Type":"ContainerDied","Data":"09f5693293312257fff4cd1d3c0ba788f55bfb19aad31e803586f6dbfec8571e"}
Sep 30 17:51:09 crc kubenswrapper[4821]: I0930 17:51:09.690564 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wrlsz/crc-debug-xhl7z"
Sep 30 17:51:09 crc kubenswrapper[4821]: I0930 17:51:09.841333 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsqcm\" (UniqueName: \"kubernetes.io/projected/0bb7b945-029d-4b37-add2-97e5067e3be2-kube-api-access-lsqcm\") pod \"0bb7b945-029d-4b37-add2-97e5067e3be2\" (UID: \"0bb7b945-029d-4b37-add2-97e5067e3be2\") "
Sep 30 17:51:09 crc kubenswrapper[4821]: I0930 17:51:09.841627 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0bb7b945-029d-4b37-add2-97e5067e3be2-host\") pod \"0bb7b945-029d-4b37-add2-97e5067e3be2\" (UID: \"0bb7b945-029d-4b37-add2-97e5067e3be2\") "
Sep 30 17:51:09 crc kubenswrapper[4821]: I0930 17:51:09.841787 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bb7b945-029d-4b37-add2-97e5067e3be2-host" (OuterVolumeSpecName: "host") pod "0bb7b945-029d-4b37-add2-97e5067e3be2" (UID: "0bb7b945-029d-4b37-add2-97e5067e3be2"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 30 17:51:09 crc kubenswrapper[4821]: I0930 17:51:09.842348 4821 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0bb7b945-029d-4b37-add2-97e5067e3be2-host\") on node \"crc\" DevicePath \"\""
Sep 30 17:51:09 crc kubenswrapper[4821]: I0930 17:51:09.860825 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bb7b945-029d-4b37-add2-97e5067e3be2-kube-api-access-lsqcm" (OuterVolumeSpecName: "kube-api-access-lsqcm") pod "0bb7b945-029d-4b37-add2-97e5067e3be2" (UID: "0bb7b945-029d-4b37-add2-97e5067e3be2"). InnerVolumeSpecName "kube-api-access-lsqcm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:51:09 crc kubenswrapper[4821]: I0930 17:51:09.944548 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsqcm\" (UniqueName: \"kubernetes.io/projected/0bb7b945-029d-4b37-add2-97e5067e3be2-kube-api-access-lsqcm\") on node \"crc\" DevicePath \"\""
Sep 30 17:51:10 crc kubenswrapper[4821]: I0930 17:51:10.611615 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wrlsz/crc-debug-xhl7z"
Sep 30 17:51:10 crc kubenswrapper[4821]: I0930 17:51:10.611572 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wrlsz/crc-debug-xhl7z" event={"ID":"0bb7b945-029d-4b37-add2-97e5067e3be2","Type":"ContainerDied","Data":"d7a570dd818c29955de0ba0f75beae85334fefa4fca9885eeb1c1c77caa6df67"}
Sep 30 17:51:10 crc kubenswrapper[4821]: I0930 17:51:10.611694 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7a570dd818c29955de0ba0f75beae85334fefa4fca9885eeb1c1c77caa6df67"
Sep 30 17:51:12 crc kubenswrapper[4821]: I0930 17:51:12.150807 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wrlsz/crc-debug-xhl7z"]
Sep 30 17:51:12 crc kubenswrapper[4821]: I0930 17:51:12.156633 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wrlsz/crc-debug-xhl7z"]
Sep 30 17:51:12 crc kubenswrapper[4821]: I0930 17:51:12.716651 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bb7b945-029d-4b37-add2-97e5067e3be2" path="/var/lib/kubelet/pods/0bb7b945-029d-4b37-add2-97e5067e3be2/volumes"
Sep 30 17:51:13 crc kubenswrapper[4821]: I0930 17:51:13.331577 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wrlsz/crc-debug-jfp46"]
Sep 30 17:51:13 crc kubenswrapper[4821]: E0930 17:51:13.331924 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bb7b945-029d-4b37-add2-97e5067e3be2" containerName="container-00"
Sep 30 17:51:13 crc kubenswrapper[4821]: I0930 17:51:13.331937 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bb7b945-029d-4b37-add2-97e5067e3be2" containerName="container-00"
Sep 30 17:51:13 crc kubenswrapper[4821]: I0930 17:51:13.332130 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bb7b945-029d-4b37-add2-97e5067e3be2" containerName="container-00"
Sep 30 17:51:13 crc kubenswrapper[4821]: I0930 17:51:13.332641 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wrlsz/crc-debug-jfp46"
Sep 30 17:51:13 crc kubenswrapper[4821]: I0930 17:51:13.335177 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-wrlsz"/"default-dockercfg-pd2rn"
Sep 30 17:51:13 crc kubenswrapper[4821]: I0930 17:51:13.496211 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b64513c5-310b-40e5-8a7c-8e094cb4f058-host\") pod \"crc-debug-jfp46\" (UID: \"b64513c5-310b-40e5-8a7c-8e094cb4f058\") " pod="openshift-must-gather-wrlsz/crc-debug-jfp46"
Sep 30 17:51:13 crc kubenswrapper[4821]: I0930 17:51:13.496745 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5xdp\" (UniqueName: \"kubernetes.io/projected/b64513c5-310b-40e5-8a7c-8e094cb4f058-kube-api-access-l5xdp\") pod \"crc-debug-jfp46\" (UID: \"b64513c5-310b-40e5-8a7c-8e094cb4f058\") " pod="openshift-must-gather-wrlsz/crc-debug-jfp46"
Sep 30 17:51:13 crc kubenswrapper[4821]: I0930 17:51:13.599345 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b64513c5-310b-40e5-8a7c-8e094cb4f058-host\") pod \"crc-debug-jfp46\" (UID: \"b64513c5-310b-40e5-8a7c-8e094cb4f058\") " pod="openshift-must-gather-wrlsz/crc-debug-jfp46"
Sep 30 17:51:13 crc kubenswrapper[4821]: I0930 17:51:13.599427 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5xdp\" (UniqueName: \"kubernetes.io/projected/b64513c5-310b-40e5-8a7c-8e094cb4f058-kube-api-access-l5xdp\") pod \"crc-debug-jfp46\" (UID: \"b64513c5-310b-40e5-8a7c-8e094cb4f058\") " pod="openshift-must-gather-wrlsz/crc-debug-jfp46"
Sep 30 17:51:13 crc kubenswrapper[4821]: I0930 17:51:13.599511 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b64513c5-310b-40e5-8a7c-8e094cb4f058-host\") pod \"crc-debug-jfp46\" (UID: \"b64513c5-310b-40e5-8a7c-8e094cb4f058\") " pod="openshift-must-gather-wrlsz/crc-debug-jfp46"
Sep 30 17:51:13 crc kubenswrapper[4821]: I0930 17:51:13.622911 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5xdp\" (UniqueName: \"kubernetes.io/projected/b64513c5-310b-40e5-8a7c-8e094cb4f058-kube-api-access-l5xdp\") pod \"crc-debug-jfp46\" (UID: \"b64513c5-310b-40e5-8a7c-8e094cb4f058\") " pod="openshift-must-gather-wrlsz/crc-debug-jfp46"
Sep 30 17:51:13 crc kubenswrapper[4821]: I0930 17:51:13.649661 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wrlsz/crc-debug-jfp46"
Sep 30 17:51:14 crc kubenswrapper[4821]: I0930 17:51:14.643593 4821 generic.go:334] "Generic (PLEG): container finished" podID="b64513c5-310b-40e5-8a7c-8e094cb4f058" containerID="43b16abadc55ef922a6680b4545c2f3f83b0f53cfae28d1560fc82717663380d" exitCode=0
Sep 30 17:51:14 crc kubenswrapper[4821]: I0930 17:51:14.643699 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wrlsz/crc-debug-jfp46" event={"ID":"b64513c5-310b-40e5-8a7c-8e094cb4f058","Type":"ContainerDied","Data":"43b16abadc55ef922a6680b4545c2f3f83b0f53cfae28d1560fc82717663380d"}
Sep 30 17:51:14 crc kubenswrapper[4821]: I0930 17:51:14.645126 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wrlsz/crc-debug-jfp46" event={"ID":"b64513c5-310b-40e5-8a7c-8e094cb4f058","Type":"ContainerStarted","Data":"ac5f64ab42da04a72a533494d10e71dedaf2c1b5f8fbfc584967ef7359929cbe"}
Sep 30 17:51:14 crc kubenswrapper[4821]: I0930 17:51:14.686515 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wrlsz/crc-debug-jfp46"]
Sep 30 17:51:14 crc kubenswrapper[4821]: I0930 17:51:14.692328 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wrlsz/crc-debug-jfp46"]
Sep 30 17:51:15 crc kubenswrapper[4821]: I0930 17:51:15.739729 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wrlsz/crc-debug-jfp46"
Sep 30 17:51:15 crc kubenswrapper[4821]: I0930 17:51:15.749467 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5xdp\" (UniqueName: \"kubernetes.io/projected/b64513c5-310b-40e5-8a7c-8e094cb4f058-kube-api-access-l5xdp\") pod \"b64513c5-310b-40e5-8a7c-8e094cb4f058\" (UID: \"b64513c5-310b-40e5-8a7c-8e094cb4f058\") "
Sep 30 17:51:15 crc kubenswrapper[4821]: I0930 17:51:15.749566 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b64513c5-310b-40e5-8a7c-8e094cb4f058-host\") pod \"b64513c5-310b-40e5-8a7c-8e094cb4f058\" (UID: \"b64513c5-310b-40e5-8a7c-8e094cb4f058\") "
Sep 30 17:51:15 crc kubenswrapper[4821]: I0930 17:51:15.750443 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b64513c5-310b-40e5-8a7c-8e094cb4f058-host" (OuterVolumeSpecName: "host") pod "b64513c5-310b-40e5-8a7c-8e094cb4f058" (UID: "b64513c5-310b-40e5-8a7c-8e094cb4f058"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 30 17:51:15 crc kubenswrapper[4821]: I0930 17:51:15.756017 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b64513c5-310b-40e5-8a7c-8e094cb4f058-kube-api-access-l5xdp" (OuterVolumeSpecName: "kube-api-access-l5xdp") pod "b64513c5-310b-40e5-8a7c-8e094cb4f058" (UID: "b64513c5-310b-40e5-8a7c-8e094cb4f058"). InnerVolumeSpecName "kube-api-access-l5xdp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:51:15 crc kubenswrapper[4821]: I0930 17:51:15.851333 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5xdp\" (UniqueName: \"kubernetes.io/projected/b64513c5-310b-40e5-8a7c-8e094cb4f058-kube-api-access-l5xdp\") on node \"crc\" DevicePath \"\""
Sep 30 17:51:15 crc kubenswrapper[4821]: I0930 17:51:15.851392 4821 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b64513c5-310b-40e5-8a7c-8e094cb4f058-host\") on node \"crc\" DevicePath \"\""
Sep 30 17:51:16 crc kubenswrapper[4821]: I0930 17:51:16.371146 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2lkfp"]
Sep 30 17:51:16 crc kubenswrapper[4821]: E0930 17:51:16.371631 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b64513c5-310b-40e5-8a7c-8e094cb4f058" containerName="container-00"
Sep 30 17:51:16 crc kubenswrapper[4821]: I0930 17:51:16.371655 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="b64513c5-310b-40e5-8a7c-8e094cb4f058" containerName="container-00"
Sep 30 17:51:16 crc kubenswrapper[4821]: I0930 17:51:16.371917 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="b64513c5-310b-40e5-8a7c-8e094cb4f058" containerName="container-00"
Sep 30 17:51:16 crc kubenswrapper[4821]: I0930 17:51:16.373436 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2lkfp"
Sep 30 17:51:16 crc kubenswrapper[4821]: I0930 17:51:16.380871 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2lkfp"]
Sep 30 17:51:16 crc kubenswrapper[4821]: I0930 17:51:16.459914 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4hpt\" (UniqueName: \"kubernetes.io/projected/670e65a7-e958-47f0-b254-8a544dbad68d-kube-api-access-b4hpt\") pod \"redhat-operators-2lkfp\" (UID: \"670e65a7-e958-47f0-b254-8a544dbad68d\") " pod="openshift-marketplace/redhat-operators-2lkfp"
Sep 30 17:51:16 crc kubenswrapper[4821]: I0930 17:51:16.459959 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/670e65a7-e958-47f0-b254-8a544dbad68d-utilities\") pod \"redhat-operators-2lkfp\" (UID: \"670e65a7-e958-47f0-b254-8a544dbad68d\") " pod="openshift-marketplace/redhat-operators-2lkfp"
Sep 30 17:51:16 crc kubenswrapper[4821]: I0930 17:51:16.460234 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/670e65a7-e958-47f0-b254-8a544dbad68d-catalog-content\") pod \"redhat-operators-2lkfp\" (UID: \"670e65a7-e958-47f0-b254-8a544dbad68d\") " pod="openshift-marketplace/redhat-operators-2lkfp"
Sep 30 17:51:16 crc kubenswrapper[4821]: I0930 17:51:16.461731 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n_e84be49a-d25d-43d4-a2ec-f2a1de45d92e/util/0.log"
Sep 30 17:51:16 crc kubenswrapper[4821]: I0930 17:51:16.562144 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/670e65a7-e958-47f0-b254-8a544dbad68d-catalog-content\") pod \"redhat-operators-2lkfp\" (UID: \"670e65a7-e958-47f0-b254-8a544dbad68d\") " pod="openshift-marketplace/redhat-operators-2lkfp"
Sep 30 17:51:16 crc kubenswrapper[4821]: I0930 17:51:16.562242 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4hpt\" (UniqueName: \"kubernetes.io/projected/670e65a7-e958-47f0-b254-8a544dbad68d-kube-api-access-b4hpt\") pod \"redhat-operators-2lkfp\" (UID: \"670e65a7-e958-47f0-b254-8a544dbad68d\") " pod="openshift-marketplace/redhat-operators-2lkfp"
Sep 30 17:51:16 crc kubenswrapper[4821]: I0930 17:51:16.562269 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/670e65a7-e958-47f0-b254-8a544dbad68d-utilities\") pod \"redhat-operators-2lkfp\" (UID: \"670e65a7-e958-47f0-b254-8a544dbad68d\") " pod="openshift-marketplace/redhat-operators-2lkfp"
Sep 30 17:51:16 crc kubenswrapper[4821]: I0930 17:51:16.562727 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/670e65a7-e958-47f0-b254-8a544dbad68d-catalog-content\") pod \"redhat-operators-2lkfp\" (UID: \"670e65a7-e958-47f0-b254-8a544dbad68d\") " pod="openshift-marketplace/redhat-operators-2lkfp"
Sep 30 17:51:16 crc kubenswrapper[4821]: I0930 17:51:16.562763 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/670e65a7-e958-47f0-b254-8a544dbad68d-utilities\") pod \"redhat-operators-2lkfp\" (UID: \"670e65a7-e958-47f0-b254-8a544dbad68d\") " pod="openshift-marketplace/redhat-operators-2lkfp"
Sep 30 17:51:16 crc kubenswrapper[4821]: I0930 17:51:16.589146 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4hpt\" (UniqueName: \"kubernetes.io/projected/670e65a7-e958-47f0-b254-8a544dbad68d-kube-api-access-b4hpt\") pod \"redhat-operators-2lkfp\" (UID: \"670e65a7-e958-47f0-b254-8a544dbad68d\") " pod="openshift-marketplace/redhat-operators-2lkfp"
Sep 30 17:51:16 crc kubenswrapper[4821]: I0930 17:51:16.660861 4821 scope.go:117] "RemoveContainer" containerID="43b16abadc55ef922a6680b4545c2f3f83b0f53cfae28d1560fc82717663380d"
Sep 30 17:51:16 crc kubenswrapper[4821]: I0930 17:51:16.660966 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wrlsz/crc-debug-jfp46"
Sep 30 17:51:16 crc kubenswrapper[4821]: I0930 17:51:16.689352 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2lkfp"
Sep 30 17:51:16 crc kubenswrapper[4821]: I0930 17:51:16.700390 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n_e84be49a-d25d-43d4-a2ec-f2a1de45d92e/pull/0.log"
Sep 30 17:51:16 crc kubenswrapper[4821]: I0930 17:51:16.727200 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b64513c5-310b-40e5-8a7c-8e094cb4f058" path="/var/lib/kubelet/pods/b64513c5-310b-40e5-8a7c-8e094cb4f058/volumes"
Sep 30 17:51:16 crc kubenswrapper[4821]: I0930 17:51:16.751618 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n_e84be49a-d25d-43d4-a2ec-f2a1de45d92e/pull/0.log"
Sep 30 17:51:16 crc kubenswrapper[4821]: I0930 17:51:16.851981 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n_e84be49a-d25d-43d4-a2ec-f2a1de45d92e/util/0.log"
Sep 30 17:51:17 crc kubenswrapper[4821]: I0930 17:51:17.063281 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n_e84be49a-d25d-43d4-a2ec-f2a1de45d92e/util/0.log"
Sep 30 17:51:17 crc kubenswrapper[4821]: I0930 17:51:17.086844 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n_e84be49a-d25d-43d4-a2ec-f2a1de45d92e/extract/0.log"
Sep 30 17:51:17 crc kubenswrapper[4821]: I0930 17:51:17.094886 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_53b081c75a08da3b0c48a12efaceb245f87acd066a088ac5229b65eb0exmk6n_e84be49a-d25d-43d4-a2ec-f2a1de45d92e/pull/0.log"
Sep 30 17:51:17 crc kubenswrapper[4821]: I0930 17:51:17.219598 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2lkfp"]
Sep 30 17:51:17 crc kubenswrapper[4821]: I0930 17:51:17.458907 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6ff8b75857-7msm9_f8fa53cb-09d0-4d60-8b2f-8114904df38c/manager/0.log"
Sep 30 17:51:17 crc kubenswrapper[4821]: I0930 17:51:17.523002 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6ff8b75857-7msm9_f8fa53cb-09d0-4d60-8b2f-8114904df38c/kube-rbac-proxy/0.log"
Sep 30 17:51:17 crc kubenswrapper[4821]: I0930 17:51:17.604353 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-644bddb6d8-p5svk_10efd7b7-19ec-41c1-871e-a44c8d0d8181/kube-rbac-proxy/0.log"
Sep 30 17:51:17 crc kubenswrapper[4821]: I0930 17:51:17.669797 4821 generic.go:334] "Generic (PLEG): container finished" podID="670e65a7-e958-47f0-b254-8a544dbad68d" containerID="c09aefbed3dc31108f7211f7fc59ea1bd55ede164118ced3f05f5fafddca74ce" exitCode=0
Sep 30 17:51:17 crc kubenswrapper[4821]: I0930 17:51:17.669855 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2lkfp" event={"ID":"670e65a7-e958-47f0-b254-8a544dbad68d","Type":"ContainerDied","Data":"c09aefbed3dc31108f7211f7fc59ea1bd55ede164118ced3f05f5fafddca74ce"}
Sep 30 17:51:17 crc kubenswrapper[4821]: I0930 17:51:17.669909 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2lkfp" event={"ID":"670e65a7-e958-47f0-b254-8a544dbad68d","Type":"ContainerStarted","Data":"a039e699855eb4b19680aadf07bf730bf1f20a42d86d2532a2c2aaa669d7804c"}
Sep 30 17:51:17 crc kubenswrapper[4821]: I0930 17:51:17.894406 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-644bddb6d8-p5svk_10efd7b7-19ec-41c1-871e-a44c8d0d8181/manager/0.log"
Sep 30 17:51:17 crc kubenswrapper[4821]: I0930 17:51:17.966005 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-84f4f7b77b-k8qkc_8941a980-0eba-405b-b73a-0d99cf87d170/manager/0.log"
Sep 30 17:51:18 crc kubenswrapper[4821]: I0930 17:51:18.003693 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-84f4f7b77b-k8qkc_8941a980-0eba-405b-b73a-0d99cf87d170/kube-rbac-proxy/0.log"
Sep 30 17:51:18 crc kubenswrapper[4821]: I0930 17:51:18.176153 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-84958c4d49-8zftm_0f92490a-9edc-463e-afa8-35d5ff0fc449/kube-rbac-proxy/0.log"
Sep 30 17:51:18 crc kubenswrapper[4821]: I0930 17:51:18.300620 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-84958c4d49-8zftm_0f92490a-9edc-463e-afa8-35d5ff0fc449/manager/0.log"
Sep 30 17:51:18 crc kubenswrapper[4821]: I0930 17:51:18.343351 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5d889d78cf-l5jlc_d46bd2b3-81c8-4425-b7d3-0df63252f647/kube-rbac-proxy/0.log"
Sep 30 17:51:18 crc kubenswrapper[4821]: I0930 17:51:18.405046 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5d889d78cf-l5jlc_d46bd2b3-81c8-4425-b7d3-0df63252f647/manager/0.log"
Sep 30 17:51:18 crc kubenswrapper[4821]: I0930 17:51:18.506690 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-9f4696d94-qvfwd_d2e266d9-b27d-4b28-a69c-15245c94e1eb/kube-rbac-proxy/0.log"
Sep 30 17:51:18 crc kubenswrapper[4821]: I0930 17:51:18.580401 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-9f4696d94-qvfwd_d2e266d9-b27d-4b28-a69c-15245c94e1eb/manager/0.log"
Sep 30 17:51:18 crc kubenswrapper[4821]: I0930 17:51:18.636712 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7d857cc749-xg92t_9aa0f9eb-c484-4503-8a83-1cce3d3034c4/kube-rbac-proxy/0.log"
Sep 30 17:51:18 crc kubenswrapper[4821]: I0930 17:51:18.847439 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-7975b88857-cxn9w_483a7050-54fe-4ae7-bc69-55a4dff975f7/kube-rbac-proxy/0.log"
Sep 30 17:51:18 crc kubenswrapper[4821]: I0930 17:51:18.855378 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-7975b88857-cxn9w_483a7050-54fe-4ae7-bc69-55a4dff975f7/manager/0.log"
Sep 30 17:51:18 crc kubenswrapper[4821]: I0930 17:51:18.907724 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7d857cc749-xg92t_9aa0f9eb-c484-4503-8a83-1cce3d3034c4/manager/0.log"
Sep 30 17:51:19 crc kubenswrapper[4821]: I0930 17:51:19.108821 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-5bd55b4bff-h5h8d_4eda6bf4-b8c3-4c02-aead-2d3bacac7b3b/kube-rbac-proxy/0.log"
Sep 30 17:51:19 crc kubenswrapper[4821]: I0930 17:51:19.215023 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-5bd55b4bff-h5h8d_4eda6bf4-b8c3-4c02-aead-2d3bacac7b3b/manager/0.log"
Sep 30 17:51:19 crc kubenswrapper[4821]: I0930 17:51:19.313063 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6d68dbc695-dh2hf_a8683557-33d9-4018-94eb-b65323379f05/kube-rbac-proxy/0.log"
Sep 30 17:51:19 crc kubenswrapper[4821]: I0930 17:51:19.350572 4821 patch_prober.go:28] interesting pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 30 17:51:19 crc kubenswrapper[4821]: I0930 17:51:19.350641 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 30 17:51:19 crc kubenswrapper[4821]: I0930 17:51:19.440154 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6d68dbc695-dh2hf_a8683557-33d9-4018-94eb-b65323379f05/manager/0.log"
Sep 30 17:51:19 crc kubenswrapper[4821]: I0930 17:51:19.473357 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-88c7-5cx4l_a232fb81-f800-4266-b287-ba2d7be562b8/kube-rbac-proxy/0.log"
Sep 30 17:51:19 crc kubenswrapper[4821]: I0930 17:51:19.578373 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-88c7-5cx4l_a232fb81-f800-4266-b287-ba2d7be562b8/manager/0.log"
Sep 30 17:51:19 crc kubenswrapper[4821]: I0930 17:51:19.687706 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2lkfp" event={"ID":"670e65a7-e958-47f0-b254-8a544dbad68d","Type":"ContainerStarted","Data":"61fe74b8ddb97680e51d00ba7777e05d5084e5a4ee80c372addbbe838f60e75e"}
Sep 30 17:51:19 crc kubenswrapper[4821]: I0930 17:51:19.734607 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64d7b59854-qcxjz_42826092-1d4a-4edd-b929-8ae464702936/kube-rbac-proxy/0.log"
Sep 30 17:51:19 crc kubenswrapper[4821]: I0930 17:51:19.744919 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64d7b59854-qcxjz_42826092-1d4a-4edd-b929-8ae464702936/manager/0.log"
Sep 30 17:51:19 crc kubenswrapper[4821]: I0930 17:51:19.904855 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-c7c776c96-6przq_879eea6a-d132-4b52-a3ce-93a890f5275a/kube-rbac-proxy/0.log"
Sep 30 17:51:20 crc kubenswrapper[4821]: I0930 17:51:20.062959 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-c7c776c96-6przq_879eea6a-d132-4b52-a3ce-93a890f5275a/manager/0.log"
Sep 30 17:51:20 crc kubenswrapper[4821]: I0930 17:51:20.066068 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-76fcc6dc7c-27fgr_0e9486f1-e0be-44d7-8789-af45165d2f81/kube-rbac-proxy/0.log"
Sep 30 17:51:20 crc kubenswrapper[4821]: I0930 17:51:20.168471 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-76fcc6dc7c-27fgr_0e9486f1-e0be-44d7-8789-af45165d2f81/manager/0.log"
Sep 30 17:51:20 crc kubenswrapper[4821]: I0930 17:51:20.350722 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-55f4778d9fdtd9v_f43d5417-95a3-4530-a722-cfb37a0caee7/manager/0.log"
Sep 30 17:51:20 crc kubenswrapper[4821]: I0930 17:51:20.369805 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-55f4778d9fdtd9v_f43d5417-95a3-4530-a722-cfb37a0caee7/kube-rbac-proxy/0.log"
Sep 30 17:51:20 crc kubenswrapper[4821]: I0930 17:51:20.471190 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5468b64689-l2dt6_62c27bc7-995e-467d-8a66-9c26828da252/kube-rbac-proxy/0.log"
Sep 30 17:51:20 crc kubenswrapper[4821]: I0930 17:51:20.691550 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-d8fdfd448-rrrhr_8e553a25-d53e-410a-9d98-288aeb2eb59e/kube-rbac-proxy/0.log"
Sep 30 17:51:20 crc kubenswrapper[4821]: I0930 17:51:20.837748 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-d8fdfd448-rrrhr_8e553a25-d53e-410a-9d98-288aeb2eb59e/operator/0.log"
Sep 30 17:51:21 crc kubenswrapper[4821]: I0930 17:51:21.018372 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-vwq7v_febb682d-9e87-4109-957d-96338ba83785/registry-server/0.log"
Sep 30 17:51:21 crc kubenswrapper[4821]: I0930 17:51:21.180549 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5468b64689-l2dt6_62c27bc7-995e-467d-8a66-9c26828da252/manager/0.log"
Sep 30 17:51:21 crc kubenswrapper[4821]: I0930 17:51:21.244049 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-9976ff44c-4s74h_e83989d6-b6f2-40d9-add4-a332f4669966/kube-rbac-proxy/0.log"
Sep 30 17:51:21 crc kubenswrapper[4821]: I0930 17:51:21.275111 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-9976ff44c-4s74h_e83989d6-b6f2-40d9-add4-a332f4669966/manager/0.log"
Sep 30 17:51:21 crc kubenswrapper[4821]: I0930 17:51:21.362645 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-589c58c6c-jnrfz_978128f9-1130-4524-b15e-97cebe35dbc5/kube-rbac-proxy/0.log"
Sep 30 17:51:21 crc kubenswrapper[4821]: I0930 17:51:21.399832 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-589c58c6c-jnrfz_978128f9-1130-4524-b15e-97cebe35dbc5/manager/0.log"
Sep 30 17:51:21 crc kubenswrapper[4821]: I0930 17:51:21.422557 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-79d8469568-wb5hw_ae386591-10fb-4e44-bd19-2c36cb821e7b/operator/0.log"
Sep 30 17:51:21 crc kubenswrapper[4821]: I0930 17:51:21.548750 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-bc7dc7bd9-dxc88_f21f3a23-f85f-44eb-83ea-77d7fe338689/kube-rbac-proxy/0.log"
Sep 30 17:51:21 crc kubenswrapper[4821]: I0930 17:51:21.645265 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-bc7dc7bd9-dxc88_f21f3a23-f85f-44eb-83ea-77d7fe338689/manager/0.log"
Sep 30 17:51:21 crc kubenswrapper[4821]: I0930 17:51:21.679245 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7bdb6cfb74-zrjwt_c39c49f5-6b1c-4961-9ee6-175732754086/kube-rbac-proxy/0.log"
Sep 30 17:51:21 crc kubenswrapper[4821]: I0930 17:51:21.710589 4821 generic.go:334] "Generic (PLEG): container finished" podID="670e65a7-e958-47f0-b254-8a544dbad68d" containerID="61fe74b8ddb97680e51d00ba7777e05d5084e5a4ee80c372addbbe838f60e75e" exitCode=0
Sep 30 17:51:21 crc kubenswrapper[4821]: I0930 17:51:21.710629 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2lkfp" event={"ID":"670e65a7-e958-47f0-b254-8a544dbad68d","Type":"ContainerDied","Data":"61fe74b8ddb97680e51d00ba7777e05d5084e5a4ee80c372addbbe838f60e75e"}
Sep 30 17:51:21 crc kubenswrapper[4821]: I0930 17:51:21.774432 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7bdb6cfb74-zrjwt_c39c49f5-6b1c-4961-9ee6-175732754086/manager/0.log"
Sep 30 17:51:21 crc kubenswrapper[4821]: I0930 17:51:21.895006 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-f66b554c6-jt7sz_878ec077-3dfa-4498-989d-72f34f449923/manager/0.log"
Sep 30 17:51:21 crc kubenswrapper[4821]: I0930 17:51:21.936926 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-f66b554c6-jt7sz_878ec077-3dfa-4498-989d-72f34f449923/kube-rbac-proxy/0.log"
Sep 30 17:51:21 crc kubenswrapper[4821]: I0930 17:51:21.959460 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-76669f99c-dm2zc_c71ada48-d571-4dc5-aa12-602adaa8bc94/kube-rbac-proxy/0.log"
Sep 30 17:51:22 crc kubenswrapper[4821]: I0930 17:51:22.051444 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-76669f99c-dm2zc_c71ada48-d571-4dc5-aa12-602adaa8bc94/manager/0.log"
Sep 30 17:51:22 crc kubenswrapper[4821]: I0930 17:51:22.750875 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2lkfp" event={"ID":"670e65a7-e958-47f0-b254-8a544dbad68d","Type":"ContainerStarted","Data":"262f96c799ccee02aef1c62bce3451590920024b1d94298ea97c49ac082c3851"}
Sep 30 17:51:22 crc kubenswrapper[4821]: I0930 17:51:22.786219 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2lkfp" podStartSLOduration=2.284741222 podStartE2EDuration="6.786197878s" podCreationTimestamp="2025-09-30 17:51:16 +0000 UTC" firstStartedPulling="2025-09-30 17:51:17.672536159 +0000 UTC m=+2873.577582103" lastFinishedPulling="2025-09-30 17:51:22.173992825 +0000 UTC m=+2878.079038759" observedRunningTime="2025-09-30 17:51:22.779772169 +0000 UTC m=+2878.684818103" watchObservedRunningTime="2025-09-30 17:51:22.786197878 +0000 UTC m=+2878.691243822"
Sep 30 17:51:26 crc kubenswrapper[4821]: I0930 17:51:26.689607 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2lkfp"
Sep 30 17:51:26 crc kubenswrapper[4821]: I0930 17:51:26.690917 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2lkfp"
Sep 30 17:51:27 crc kubenswrapper[4821]: I0930 17:51:27.749307 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2lkfp" podUID="670e65a7-e958-47f0-b254-8a544dbad68d" containerName="registry-server" probeResult="failure" output=<
Sep 30 17:51:27 crc kubenswrapper[4821]: timeout: failed to connect service ":50051" within 1s
Sep 30 17:51:27 crc kubenswrapper[4821]: >
Sep 30 17:51:36 crc kubenswrapper[4821]: I0930 17:51:36.740875 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2lkfp"
Sep 30 17:51:36 crc kubenswrapper[4821]: I0930 17:51:36.795067 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2lkfp"
Sep 30 17:51:36 crc kubenswrapper[4821]: I0930 17:51:36.974916 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2lkfp"]
Sep 30 17:51:37 crc kubenswrapper[4821]: I0930 17:51:37.852192 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2lkfp" podUID="670e65a7-e958-47f0-b254-8a544dbad68d" containerName="registry-server" containerID="cri-o://262f96c799ccee02aef1c62bce3451590920024b1d94298ea97c49ac082c3851" gracePeriod=2
Sep 30 17:51:38 crc kubenswrapper[4821]: I0930 17:51:38.304453 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2lkfp"
Sep 30 17:51:38 crc kubenswrapper[4821]: I0930 17:51:38.386191 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-crvsh_6b098eac-8578-4bea-ae1d-af41fc24e2b7/control-plane-machine-set-operator/0.log"
Sep 30 17:51:38 crc kubenswrapper[4821]: I0930 17:51:38.439348 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/670e65a7-e958-47f0-b254-8a544dbad68d-catalog-content\") pod \"670e65a7-e958-47f0-b254-8a544dbad68d\" (UID: \"670e65a7-e958-47f0-b254-8a544dbad68d\") "
Sep 30 17:51:38 crc kubenswrapper[4821]: I0930 17:51:38.439419 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4hpt\" (UniqueName: \"kubernetes.io/projected/670e65a7-e958-47f0-b254-8a544dbad68d-kube-api-access-b4hpt\") pod \"670e65a7-e958-47f0-b254-8a544dbad68d\" (UID: \"670e65a7-e958-47f0-b254-8a544dbad68d\") "
Sep 30 17:51:38 crc kubenswrapper[4821]: I0930 17:51:38.439477 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/670e65a7-e958-47f0-b254-8a544dbad68d-utilities\") pod \"670e65a7-e958-47f0-b254-8a544dbad68d\" (UID: \"670e65a7-e958-47f0-b254-8a544dbad68d\") "
Sep 30 17:51:38 crc kubenswrapper[4821]: I0930 17:51:38.440495 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/670e65a7-e958-47f0-b254-8a544dbad68d-utilities" (OuterVolumeSpecName: "utilities") pod "670e65a7-e958-47f0-b254-8a544dbad68d" (UID: "670e65a7-e958-47f0-b254-8a544dbad68d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 17:51:38 crc kubenswrapper[4821]: I0930 17:51:38.446442 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/670e65a7-e958-47f0-b254-8a544dbad68d-kube-api-access-b4hpt" (OuterVolumeSpecName: "kube-api-access-b4hpt") pod "670e65a7-e958-47f0-b254-8a544dbad68d" (UID: "670e65a7-e958-47f0-b254-8a544dbad68d"). InnerVolumeSpecName "kube-api-access-b4hpt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:51:38 crc kubenswrapper[4821]: I0930 17:51:38.545101 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4hpt\" (UniqueName: \"kubernetes.io/projected/670e65a7-e958-47f0-b254-8a544dbad68d-kube-api-access-b4hpt\") on node \"crc\" DevicePath \"\""
Sep 30 17:51:38 crc kubenswrapper[4821]: I0930 17:51:38.545129 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/670e65a7-e958-47f0-b254-8a544dbad68d-utilities\") on node \"crc\" DevicePath \"\""
Sep 30 17:51:38 crc kubenswrapper[4821]: I0930 17:51:38.556457 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-gpsch_75516b13-a330-4e17-a2e1-bd1c04ad9500/kube-rbac-proxy/0.log"
Sep 30 17:51:38 crc kubenswrapper[4821]: I0930 17:51:38.565382 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/670e65a7-e958-47f0-b254-8a544dbad68d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "670e65a7-e958-47f0-b254-8a544dbad68d" (UID: "670e65a7-e958-47f0-b254-8a544dbad68d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 17:51:38 crc kubenswrapper[4821]: I0930 17:51:38.626689 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-gpsch_75516b13-a330-4e17-a2e1-bd1c04ad9500/machine-api-operator/0.log"
Sep 30 17:51:38 crc kubenswrapper[4821]: I0930 17:51:38.646389 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/670e65a7-e958-47f0-b254-8a544dbad68d-catalog-content\") on node \"crc\" DevicePath \"\""
Sep 30 17:51:38 crc kubenswrapper[4821]: I0930 17:51:38.860233 4821 generic.go:334] "Generic (PLEG): container finished" podID="670e65a7-e958-47f0-b254-8a544dbad68d" containerID="262f96c799ccee02aef1c62bce3451590920024b1d94298ea97c49ac082c3851" exitCode=0
Sep 30 17:51:38 crc kubenswrapper[4821]: I0930 17:51:38.860307 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2lkfp"
Sep 30 17:51:38 crc kubenswrapper[4821]: I0930 17:51:38.860348 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2lkfp" event={"ID":"670e65a7-e958-47f0-b254-8a544dbad68d","Type":"ContainerDied","Data":"262f96c799ccee02aef1c62bce3451590920024b1d94298ea97c49ac082c3851"}
Sep 30 17:51:38 crc kubenswrapper[4821]: I0930 17:51:38.861342 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2lkfp" event={"ID":"670e65a7-e958-47f0-b254-8a544dbad68d","Type":"ContainerDied","Data":"a039e699855eb4b19680aadf07bf730bf1f20a42d86d2532a2c2aaa669d7804c"}
Sep 30 17:51:38 crc kubenswrapper[4821]: I0930 17:51:38.861366 4821 scope.go:117] "RemoveContainer" containerID="262f96c799ccee02aef1c62bce3451590920024b1d94298ea97c49ac082c3851"
Sep 30 17:51:38 crc kubenswrapper[4821]: I0930 17:51:38.879007 4821 scope.go:117] "RemoveContainer" containerID="61fe74b8ddb97680e51d00ba7777e05d5084e5a4ee80c372addbbe838f60e75e"
Sep 30 17:51:38 crc kubenswrapper[4821]: I0930 17:51:38.897206 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2lkfp"]
Sep 30 17:51:38 crc kubenswrapper[4821]: I0930 17:51:38.901979 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2lkfp"]
Sep 30 17:51:38 crc kubenswrapper[4821]: I0930 17:51:38.911223 4821 scope.go:117] "RemoveContainer" containerID="c09aefbed3dc31108f7211f7fc59ea1bd55ede164118ced3f05f5fafddca74ce"
Sep 30 17:51:38 crc kubenswrapper[4821]: I0930 17:51:38.950524 4821 scope.go:117] "RemoveContainer" containerID="262f96c799ccee02aef1c62bce3451590920024b1d94298ea97c49ac082c3851"
Sep 30 17:51:38 crc kubenswrapper[4821]: E0930 17:51:38.952789 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"262f96c799ccee02aef1c62bce3451590920024b1d94298ea97c49ac082c3851\": container with ID starting with 262f96c799ccee02aef1c62bce3451590920024b1d94298ea97c49ac082c3851 not found: ID does not exist" containerID="262f96c799ccee02aef1c62bce3451590920024b1d94298ea97c49ac082c3851"
Sep 30 17:51:38 crc kubenswrapper[4821]: I0930 17:51:38.952858 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"262f96c799ccee02aef1c62bce3451590920024b1d94298ea97c49ac082c3851"} err="failed to get container status \"262f96c799ccee02aef1c62bce3451590920024b1d94298ea97c49ac082c3851\": rpc error: code = NotFound desc = could not find container \"262f96c799ccee02aef1c62bce3451590920024b1d94298ea97c49ac082c3851\": container with ID starting with 262f96c799ccee02aef1c62bce3451590920024b1d94298ea97c49ac082c3851 not found: ID does not exist"
Sep 30 17:51:38 crc kubenswrapper[4821]: I0930 17:51:38.952897 4821 scope.go:117] "RemoveContainer" containerID="61fe74b8ddb97680e51d00ba7777e05d5084e5a4ee80c372addbbe838f60e75e"
Sep 30 17:51:38 crc kubenswrapper[4821]: E0930 17:51:38.953513 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61fe74b8ddb97680e51d00ba7777e05d5084e5a4ee80c372addbbe838f60e75e\": container with ID starting with 61fe74b8ddb97680e51d00ba7777e05d5084e5a4ee80c372addbbe838f60e75e not found: ID does not exist" containerID="61fe74b8ddb97680e51d00ba7777e05d5084e5a4ee80c372addbbe838f60e75e"
Sep 30 17:51:38 crc kubenswrapper[4821]: I0930 17:51:38.953629 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61fe74b8ddb97680e51d00ba7777e05d5084e5a4ee80c372addbbe838f60e75e"} err="failed to get container status \"61fe74b8ddb97680e51d00ba7777e05d5084e5a4ee80c372addbbe838f60e75e\": rpc error: code = NotFound desc = could not find container \"61fe74b8ddb97680e51d00ba7777e05d5084e5a4ee80c372addbbe838f60e75e\": container with ID starting with 61fe74b8ddb97680e51d00ba7777e05d5084e5a4ee80c372addbbe838f60e75e not found: ID does not exist"
Sep 30 17:51:38 crc kubenswrapper[4821]: I0930 17:51:38.953745 4821 scope.go:117] "RemoveContainer" containerID="c09aefbed3dc31108f7211f7fc59ea1bd55ede164118ced3f05f5fafddca74ce"
Sep 30 17:51:38 crc kubenswrapper[4821]: E0930 17:51:38.958071 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c09aefbed3dc31108f7211f7fc59ea1bd55ede164118ced3f05f5fafddca74ce\": container with ID starting with c09aefbed3dc31108f7211f7fc59ea1bd55ede164118ced3f05f5fafddca74ce not found: ID does not exist" containerID="c09aefbed3dc31108f7211f7fc59ea1bd55ede164118ced3f05f5fafddca74ce"
Sep 30 17:51:38 crc kubenswrapper[4821]: I0930 17:51:38.958244 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c09aefbed3dc31108f7211f7fc59ea1bd55ede164118ced3f05f5fafddca74ce"} err="failed to get container status \"c09aefbed3dc31108f7211f7fc59ea1bd55ede164118ced3f05f5fafddca74ce\": rpc error: code = NotFound desc = could not find container \"c09aefbed3dc31108f7211f7fc59ea1bd55ede164118ced3f05f5fafddca74ce\": container with ID starting with c09aefbed3dc31108f7211f7fc59ea1bd55ede164118ced3f05f5fafddca74ce not found: ID does not exist"
Sep 30 17:51:40 crc kubenswrapper[4821]: I0930 17:51:40.719697 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="670e65a7-e958-47f0-b254-8a544dbad68d" path="/var/lib/kubelet/pods/670e65a7-e958-47f0-b254-8a544dbad68d/volumes"
Sep 30 17:51:49 crc kubenswrapper[4821]: I0930 17:51:49.349558 4821 patch_prober.go:28] interesting pod/machine-config-daemon-q2xpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 30 17:51:49 crc kubenswrapper[4821]: I0930 17:51:49.350139 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 30 17:51:49 crc kubenswrapper[4821]: I0930 17:51:49.350184 4821 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd"
Sep 30 17:51:49 crc kubenswrapper[4821]: I0930 17:51:49.350903 4821 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8949956a6c788e489e965cc971d76b0643c24aff4ee5c92bb208d99216988148"} pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Sep 30 17:51:49 crc kubenswrapper[4821]: I0930 17:51:49.350947 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerName="machine-config-daemon" containerID="cri-o://8949956a6c788e489e965cc971d76b0643c24aff4ee5c92bb208d99216988148" gracePeriod=600
Sep 30 17:51:49 crc kubenswrapper[4821]: E0930 17:51:49.484838 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434"
Sep 30 17:51:49 crc kubenswrapper[4821]: I0930 17:51:49.944209 4821 generic.go:334] "Generic (PLEG): container finished" podID="1c2ce348-eadc-4629-a03f-fb8924b5b434" containerID="8949956a6c788e489e965cc971d76b0643c24aff4ee5c92bb208d99216988148" exitCode=0
Sep 30 17:51:49 crc kubenswrapper[4821]: I0930 17:51:49.944393 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" event={"ID":"1c2ce348-eadc-4629-a03f-fb8924b5b434","Type":"ContainerDied","Data":"8949956a6c788e489e965cc971d76b0643c24aff4ee5c92bb208d99216988148"}
Sep 30 17:51:49 crc kubenswrapper[4821]: I0930 17:51:49.944532 4821 scope.go:117] "RemoveContainer" containerID="5cf50fdd875b9603c0f2935c268e8a354214b5cbc70ddfbf503b7db418b807ce"
Sep 30 17:51:49 crc kubenswrapper[4821]: I0930 17:51:49.945119 4821 scope.go:117] "RemoveContainer" containerID="8949956a6c788e489e965cc971d76b0643c24aff4ee5c92bb208d99216988148"
Sep 30 17:51:49 crc kubenswrapper[4821]: E0930 17:51:49.945348 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434"
Sep 30 17:51:50 crc kubenswrapper[4821]: I0930 17:51:50.049307 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-fs727_73e8eb4d-cb66-48ae-b04b-303bb8e66a6e/cert-manager-controller/0.log"
Sep 30 17:51:50 crc kubenswrapper[4821]: I0930 17:51:50.185100 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-j9fkd_75333b8a-3bd4-4aed-8dec-1399b3b8d7f8/cert-manager-cainjector/0.log"
Sep 30 17:51:50 crc kubenswrapper[4821]: I0930 17:51:50.245270 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-rj4sb_0c1c8a34-9395-4425-b589-dc71349c9cbe/cert-manager-webhook/0.log"
Sep 30 17:52:01 crc kubenswrapper[4821]: I0930 17:52:01.456627 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-864bb6dfb5-s6qlq_838dc90c-5925-4cc0-9f35-2a4efc53adc9/nmstate-console-plugin/0.log"
Sep 30 17:52:01 crc kubenswrapper[4821]: I0930 17:52:01.633761 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-7vzn5_58c46502-d375-4f8d-80fb-e43798a3d459/nmstate-handler/0.log"
Sep 30 17:52:01 crc kubenswrapper[4821]: I0930 17:52:01.663803 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58fcddf996-xlwkz_a435cb08-e538-4898-845f-cb093a28d190/nmstate-metrics/0.log"
Sep 30 17:52:01 crc kubenswrapper[4821]: I0930 17:52:01.725235 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58fcddf996-xlwkz_a435cb08-e538-4898-845f-cb093a28d190/kube-rbac-proxy/0.log"
Sep 30 17:52:01 crc kubenswrapper[4821]: I0930 17:52:01.850736 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5d6f6cfd66-fl26l_5fa2210d-5050-4669-91fb-2fcb41e8bb1c/nmstate-operator/0.log"
Sep 30 17:52:01 crc kubenswrapper[4821]: I0930 17:52:01.956142 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6d689559c5-7sb9q_843eeb31-9be1-4632-a58a-0bbe45efa603/nmstate-webhook/0.log"
Sep 30 17:52:04 crc kubenswrapper[4821]: I0930 17:52:04.711304 4821 scope.go:117] "RemoveContainer" containerID="8949956a6c788e489e965cc971d76b0643c24aff4ee5c92bb208d99216988148"
Sep 30 17:52:04 crc kubenswrapper[4821]: E0930 17:52:04.712065 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434"
Sep 30 17:52:15 crc kubenswrapper[4821]: I0930 17:52:15.260255 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5d688f5ffc-k9l8b_3057dbb5-a3f4-46ec-a33e-187a35d695a9/kube-rbac-proxy/0.log"
Sep 30 17:52:15 crc kubenswrapper[4821]: I0930 17:52:15.389118 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5d688f5ffc-k9l8b_3057dbb5-a3f4-46ec-a33e-187a35d695a9/controller/0.log"
Sep 30 17:52:15 crc kubenswrapper[4821]: I0930 17:52:15.448498 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-5478bdb765-9r6qh_9deaad26-049a-4380-99c4-8d34358367af/frr-k8s-webhook-server/0.log"
Sep 30 17:52:15 crc kubenswrapper[4821]: I0930 17:52:15.595859 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xnl56_d38ffd03-48b8-4684-aff0-089081da1320/cp-frr-files/0.log"
Sep 30 17:52:15 crc kubenswrapper[4821]: I0930 17:52:15.769402 4821 log.go:25] "Finished parsing log file"
path="/var/log/pods/metallb-system_frr-k8s-xnl56_d38ffd03-48b8-4684-aff0-089081da1320/cp-frr-files/0.log" Sep 30 17:52:15 crc kubenswrapper[4821]: I0930 17:52:15.810731 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xnl56_d38ffd03-48b8-4684-aff0-089081da1320/cp-reloader/0.log" Sep 30 17:52:15 crc kubenswrapper[4821]: I0930 17:52:15.817133 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xnl56_d38ffd03-48b8-4684-aff0-089081da1320/cp-metrics/0.log" Sep 30 17:52:15 crc kubenswrapper[4821]: I0930 17:52:15.841129 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xnl56_d38ffd03-48b8-4684-aff0-089081da1320/cp-reloader/0.log" Sep 30 17:52:16 crc kubenswrapper[4821]: I0930 17:52:16.009022 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xnl56_d38ffd03-48b8-4684-aff0-089081da1320/cp-metrics/0.log" Sep 30 17:52:16 crc kubenswrapper[4821]: I0930 17:52:16.046798 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xnl56_d38ffd03-48b8-4684-aff0-089081da1320/cp-reloader/0.log" Sep 30 17:52:16 crc kubenswrapper[4821]: I0930 17:52:16.052787 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xnl56_d38ffd03-48b8-4684-aff0-089081da1320/cp-frr-files/0.log" Sep 30 17:52:16 crc kubenswrapper[4821]: I0930 17:52:16.070890 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xnl56_d38ffd03-48b8-4684-aff0-089081da1320/cp-metrics/0.log" Sep 30 17:52:16 crc kubenswrapper[4821]: I0930 17:52:16.281761 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xnl56_d38ffd03-48b8-4684-aff0-089081da1320/cp-frr-files/0.log" Sep 30 17:52:16 crc kubenswrapper[4821]: I0930 17:52:16.297386 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xnl56_d38ffd03-48b8-4684-aff0-089081da1320/cp-metrics/0.log" Sep 30 17:52:16 crc kubenswrapper[4821]: I0930 17:52:16.299447 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xnl56_d38ffd03-48b8-4684-aff0-089081da1320/cp-reloader/0.log" Sep 30 17:52:16 crc kubenswrapper[4821]: I0930 17:52:16.364771 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xnl56_d38ffd03-48b8-4684-aff0-089081da1320/controller/0.log" Sep 30 17:52:16 crc kubenswrapper[4821]: I0930 17:52:16.502592 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xnl56_d38ffd03-48b8-4684-aff0-089081da1320/kube-rbac-proxy/0.log" Sep 30 17:52:16 crc kubenswrapper[4821]: I0930 17:52:16.533859 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xnl56_d38ffd03-48b8-4684-aff0-089081da1320/frr-metrics/0.log" Sep 30 17:52:16 crc kubenswrapper[4821]: I0930 17:52:16.590057 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xnl56_d38ffd03-48b8-4684-aff0-089081da1320/kube-rbac-proxy-frr/0.log" Sep 30 17:52:16 crc kubenswrapper[4821]: I0930 17:52:16.701962 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xnl56_d38ffd03-48b8-4684-aff0-089081da1320/reloader/0.log" Sep 30 17:52:16 crc kubenswrapper[4821]: I0930 17:52:16.706731 4821 scope.go:117] "RemoveContainer" containerID="8949956a6c788e489e965cc971d76b0643c24aff4ee5c92bb208d99216988148" Sep 30 17:52:16 crc kubenswrapper[4821]: E0930 17:52:16.706977 4821 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:52:16 crc kubenswrapper[4821]: I0930 17:52:16.967509 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-56f8dc8465-bxgfl_c823710e-442d-4956-aaff-8822ff222043/manager/0.log" Sep 30 17:52:17 crc kubenswrapper[4821]: I0930 17:52:17.087447 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-c79f4dfd9-xvwvr_7b152dda-dec3-4365-bc77-cb8e52ca5cb0/webhook-server/0.log" Sep 30 17:52:17 crc kubenswrapper[4821]: I0930 17:52:17.314167 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xnl56_d38ffd03-48b8-4684-aff0-089081da1320/frr/0.log" Sep 30 17:52:17 crc kubenswrapper[4821]: I0930 17:52:17.362857 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-df78k_54bb31d4-ac1a-4dcc-acaa-6dd8f4452921/kube-rbac-proxy/0.log" Sep 30 17:52:17 crc kubenswrapper[4821]: I0930 17:52:17.717454 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-df78k_54bb31d4-ac1a-4dcc-acaa-6dd8f4452921/speaker/0.log" Sep 30 17:52:28 crc kubenswrapper[4821]: I0930 17:52:28.706970 4821 scope.go:117] "RemoveContainer" containerID="8949956a6c788e489e965cc971d76b0643c24aff4ee5c92bb208d99216988148" Sep 30 17:52:28 crc kubenswrapper[4821]: E0930 17:52:28.708521 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:52:30 crc kubenswrapper[4821]: I0930 17:52:30.370343 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg_a04a670a-fd36-4b30-be56-f31c9da6f350/util/0.log" Sep 30 17:52:30 crc kubenswrapper[4821]: I0930 17:52:30.488708 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg_a04a670a-fd36-4b30-be56-f31c9da6f350/util/0.log" Sep 30 17:52:30 crc kubenswrapper[4821]: I0930 17:52:30.501236 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg_a04a670a-fd36-4b30-be56-f31c9da6f350/pull/0.log" Sep 30 17:52:30 crc kubenswrapper[4821]: I0930 17:52:30.540548 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg_a04a670a-fd36-4b30-be56-f31c9da6f350/pull/0.log" Sep 30 17:52:30 crc kubenswrapper[4821]: I0930 17:52:30.711661 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg_a04a670a-fd36-4b30-be56-f31c9da6f350/util/0.log" Sep 30 17:52:30 crc 
kubenswrapper[4821]: I0930 17:52:30.723908 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg_a04a670a-fd36-4b30-be56-f31c9da6f350/pull/0.log" Sep 30 17:52:30 crc kubenswrapper[4821]: I0930 17:52:30.807693 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcmzczg_a04a670a-fd36-4b30-be56-f31c9da6f350/extract/0.log" Sep 30 17:52:30 crc kubenswrapper[4821]: I0930 17:52:30.888508 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nm7zd_b2baa7a3-2088-4b6b-8bef-d629dc402b87/extract-utilities/0.log" Sep 30 17:52:31 crc kubenswrapper[4821]: I0930 17:52:31.107948 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nm7zd_b2baa7a3-2088-4b6b-8bef-d629dc402b87/extract-utilities/0.log" Sep 30 17:52:31 crc kubenswrapper[4821]: I0930 17:52:31.116195 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nm7zd_b2baa7a3-2088-4b6b-8bef-d629dc402b87/extract-content/0.log" Sep 30 17:52:31 crc kubenswrapper[4821]: I0930 17:52:31.139687 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nm7zd_b2baa7a3-2088-4b6b-8bef-d629dc402b87/extract-content/0.log" Sep 30 17:52:31 crc kubenswrapper[4821]: I0930 17:52:31.318625 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nm7zd_b2baa7a3-2088-4b6b-8bef-d629dc402b87/extract-utilities/0.log" Sep 30 17:52:31 crc kubenswrapper[4821]: I0930 17:52:31.351650 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nm7zd_b2baa7a3-2088-4b6b-8bef-d629dc402b87/extract-content/0.log" Sep 30 17:52:31 crc kubenswrapper[4821]: I0930 17:52:31.592139 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nm7zd_b2baa7a3-2088-4b6b-8bef-d629dc402b87/registry-server/0.log" Sep 30 17:52:31 crc kubenswrapper[4821]: I0930 17:52:31.680431 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ljdkf_5dac66f0-9520-438e-aefe-321f0a63733e/extract-utilities/0.log" Sep 30 17:52:31 crc kubenswrapper[4821]: I0930 17:52:31.820741 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ljdkf_5dac66f0-9520-438e-aefe-321f0a63733e/extract-content/0.log" Sep 30 17:52:31 crc kubenswrapper[4821]: I0930 17:52:31.837721 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ljdkf_5dac66f0-9520-438e-aefe-321f0a63733e/extract-utilities/0.log" Sep 30 17:52:31 crc kubenswrapper[4821]: I0930 17:52:31.838808 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ljdkf_5dac66f0-9520-438e-aefe-321f0a63733e/extract-content/0.log" Sep 30 17:52:32 crc kubenswrapper[4821]: I0930 17:52:32.056388 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ljdkf_5dac66f0-9520-438e-aefe-321f0a63733e/extract-utilities/0.log" Sep 30 17:52:32 crc kubenswrapper[4821]: I0930 17:52:32.071031 4821 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-ljdkf_5dac66f0-9520-438e-aefe-321f0a63733e/extract-content/0.log" Sep 30 17:52:32 crc kubenswrapper[4821]: I0930 17:52:32.280492 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg_3cf49a14-2605-4bfb-9dce-04b1438b107c/util/0.log" Sep 30 17:52:32 crc kubenswrapper[4821]: I0930 17:52:32.508994 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ljdkf_5dac66f0-9520-438e-aefe-321f0a63733e/registry-server/0.log" Sep 30 17:52:32 crc kubenswrapper[4821]: I0930 17:52:32.738071 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg_3cf49a14-2605-4bfb-9dce-04b1438b107c/pull/0.log" Sep 30 17:52:32 crc kubenswrapper[4821]: I0930 17:52:32.816313 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg_3cf49a14-2605-4bfb-9dce-04b1438b107c/util/0.log" Sep 30 17:52:32 crc kubenswrapper[4821]: I0930 17:52:32.829931 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg_3cf49a14-2605-4bfb-9dce-04b1438b107c/pull/0.log" Sep 30 17:52:33 crc kubenswrapper[4821]: I0930 17:52:33.023735 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg_3cf49a14-2605-4bfb-9dce-04b1438b107c/extract/0.log" Sep 30 17:52:33 crc kubenswrapper[4821]: I0930 17:52:33.055620 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg_3cf49a14-2605-4bfb-9dce-04b1438b107c/util/0.log" Sep 30 17:52:33 crc kubenswrapper[4821]: I0930 17:52:33.069827 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r8xzg_3cf49a14-2605-4bfb-9dce-04b1438b107c/pull/0.log" Sep 30 17:52:33 crc kubenswrapper[4821]: I0930 17:52:33.288351 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f9gmq_a4dcc63c-3f2c-413b-a521-ef2edb6d45bd/extract-utilities/0.log" Sep 30 17:52:33 crc kubenswrapper[4821]: I0930 17:52:33.381345 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-g6ghn_8b3d34e8-81c3-4214-a3d9-a3d787b69b9a/marketplace-operator/0.log" Sep 30 17:52:33 crc kubenswrapper[4821]: I0930 17:52:33.564108 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f9gmq_a4dcc63c-3f2c-413b-a521-ef2edb6d45bd/extract-content/0.log" Sep 30 17:52:33 crc kubenswrapper[4821]: I0930 17:52:33.652927 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f9gmq_a4dcc63c-3f2c-413b-a521-ef2edb6d45bd/extract-utilities/0.log" Sep 30 17:52:33 crc kubenswrapper[4821]: I0930 17:52:33.676293 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f9gmq_a4dcc63c-3f2c-413b-a521-ef2edb6d45bd/extract-content/0.log" Sep 30 17:52:33 crc kubenswrapper[4821]: I0930 17:52:33.772722 4821 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-f9gmq_a4dcc63c-3f2c-413b-a521-ef2edb6d45bd/extract-utilities/0.log" Sep 30 17:52:33 crc kubenswrapper[4821]: I0930 17:52:33.843469 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f9gmq_a4dcc63c-3f2c-413b-a521-ef2edb6d45bd/extract-content/0.log" Sep 30 17:52:33 crc kubenswrapper[4821]: I0930 17:52:33.961421 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f9gmq_a4dcc63c-3f2c-413b-a521-ef2edb6d45bd/registry-server/0.log" Sep 30 17:52:34 crc kubenswrapper[4821]: I0930 17:52:34.063171 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-c6d9b_68f64d5d-0c46-4199-977c-a9d7820a9c80/extract-utilities/0.log" Sep 30 17:52:34 crc kubenswrapper[4821]: I0930 17:52:34.190570 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-c6d9b_68f64d5d-0c46-4199-977c-a9d7820a9c80/extract-utilities/0.log" Sep 30 17:52:34 crc kubenswrapper[4821]: I0930 17:52:34.236999 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-c6d9b_68f64d5d-0c46-4199-977c-a9d7820a9c80/extract-content/0.log" Sep 30 17:52:34 crc kubenswrapper[4821]: I0930 17:52:34.241817 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-c6d9b_68f64d5d-0c46-4199-977c-a9d7820a9c80/extract-content/0.log" Sep 30 17:52:34 crc kubenswrapper[4821]: I0930 17:52:34.409737 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-c6d9b_68f64d5d-0c46-4199-977c-a9d7820a9c80/extract-content/0.log" Sep 30 17:52:34 crc kubenswrapper[4821]: I0930 17:52:34.419296 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-c6d9b_68f64d5d-0c46-4199-977c-a9d7820a9c80/extract-utilities/0.log" Sep 30 17:52:34 crc kubenswrapper[4821]: I0930 17:52:34.797040 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-c6d9b_68f64d5d-0c46-4199-977c-a9d7820a9c80/registry-server/0.log" Sep 30 17:52:40 crc kubenswrapper[4821]: I0930 17:52:40.200763 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sw7pk"] Sep 30 17:52:40 crc kubenswrapper[4821]: E0930 17:52:40.201817 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="670e65a7-e958-47f0-b254-8a544dbad68d" containerName="extract-utilities" Sep 30 17:52:40 crc kubenswrapper[4821]: I0930 17:52:40.201837 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="670e65a7-e958-47f0-b254-8a544dbad68d" containerName="extract-utilities" Sep 30 17:52:40 crc kubenswrapper[4821]: E0930 17:52:40.201870 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="670e65a7-e958-47f0-b254-8a544dbad68d" containerName="registry-server" Sep 30 17:52:40 crc kubenswrapper[4821]: I0930 17:52:40.201879 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="670e65a7-e958-47f0-b254-8a544dbad68d" containerName="registry-server" Sep 30 17:52:40 crc kubenswrapper[4821]: E0930 17:52:40.201903 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="670e65a7-e958-47f0-b254-8a544dbad68d" containerName="extract-content" Sep 30 17:52:40 crc kubenswrapper[4821]: I0930 17:52:40.201912 4821 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="670e65a7-e958-47f0-b254-8a544dbad68d" containerName="extract-content" Sep 30 17:52:40 crc kubenswrapper[4821]: I0930 17:52:40.202180 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="670e65a7-e958-47f0-b254-8a544dbad68d" containerName="registry-server" Sep 30 17:52:40 crc kubenswrapper[4821]: I0930 17:52:40.203845 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sw7pk" Sep 30 17:52:40 crc kubenswrapper[4821]: I0930 17:52:40.231741 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sw7pk"] Sep 30 17:52:40 crc kubenswrapper[4821]: I0930 17:52:40.245618 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a800ba25-5ec6-4f87-8a22-0cab273c87d7-catalog-content\") pod \"redhat-marketplace-sw7pk\" (UID: \"a800ba25-5ec6-4f87-8a22-0cab273c87d7\") " pod="openshift-marketplace/redhat-marketplace-sw7pk" Sep 30 17:52:40 crc kubenswrapper[4821]: I0930 17:52:40.245672 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a800ba25-5ec6-4f87-8a22-0cab273c87d7-utilities\") pod \"redhat-marketplace-sw7pk\" (UID: \"a800ba25-5ec6-4f87-8a22-0cab273c87d7\") " pod="openshift-marketplace/redhat-marketplace-sw7pk" Sep 30 17:52:40 crc kubenswrapper[4821]: I0930 17:52:40.245719 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bztbq\" (UniqueName: \"kubernetes.io/projected/a800ba25-5ec6-4f87-8a22-0cab273c87d7-kube-api-access-bztbq\") pod \"redhat-marketplace-sw7pk\" (UID: \"a800ba25-5ec6-4f87-8a22-0cab273c87d7\") " pod="openshift-marketplace/redhat-marketplace-sw7pk" Sep 30 17:52:40 crc kubenswrapper[4821]: I0930 17:52:40.347470 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a800ba25-5ec6-4f87-8a22-0cab273c87d7-catalog-content\") pod \"redhat-marketplace-sw7pk\" (UID: \"a800ba25-5ec6-4f87-8a22-0cab273c87d7\") " pod="openshift-marketplace/redhat-marketplace-sw7pk" Sep 30 17:52:40 crc kubenswrapper[4821]: I0930 17:52:40.347548 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a800ba25-5ec6-4f87-8a22-0cab273c87d7-utilities\") pod \"redhat-marketplace-sw7pk\" (UID: \"a800ba25-5ec6-4f87-8a22-0cab273c87d7\") " pod="openshift-marketplace/redhat-marketplace-sw7pk" Sep 30 17:52:40 crc kubenswrapper[4821]: I0930 17:52:40.347620 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bztbq\" (UniqueName: \"kubernetes.io/projected/a800ba25-5ec6-4f87-8a22-0cab273c87d7-kube-api-access-bztbq\") pod \"redhat-marketplace-sw7pk\" (UID: \"a800ba25-5ec6-4f87-8a22-0cab273c87d7\") " pod="openshift-marketplace/redhat-marketplace-sw7pk" Sep 30 17:52:40 crc kubenswrapper[4821]: I0930 17:52:40.348220 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a800ba25-5ec6-4f87-8a22-0cab273c87d7-catalog-content\") pod \"redhat-marketplace-sw7pk\" (UID: \"a800ba25-5ec6-4f87-8a22-0cab273c87d7\") " pod="openshift-marketplace/redhat-marketplace-sw7pk" Sep 30 17:52:40 crc kubenswrapper[4821]: I0930 17:52:40.348255 4821 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a800ba25-5ec6-4f87-8a22-0cab273c87d7-utilities\") pod \"redhat-marketplace-sw7pk\" (UID: \"a800ba25-5ec6-4f87-8a22-0cab273c87d7\") " pod="openshift-marketplace/redhat-marketplace-sw7pk" Sep 30 17:52:40 crc kubenswrapper[4821]: I0930 17:52:40.375136 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bztbq\" (UniqueName: \"kubernetes.io/projected/a800ba25-5ec6-4f87-8a22-0cab273c87d7-kube-api-access-bztbq\") pod \"redhat-marketplace-sw7pk\" (UID: \"a800ba25-5ec6-4f87-8a22-0cab273c87d7\") " pod="openshift-marketplace/redhat-marketplace-sw7pk" Sep 30 17:52:40 crc kubenswrapper[4821]: I0930 17:52:40.536408 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sw7pk" Sep 30 17:52:41 crc kubenswrapper[4821]: I0930 17:52:41.041449 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sw7pk"] Sep 30 17:52:41 crc kubenswrapper[4821]: I0930 17:52:41.313349 4821 generic.go:334] "Generic (PLEG): container finished" podID="a800ba25-5ec6-4f87-8a22-0cab273c87d7" containerID="8266f27496bb5c4c24cff488070f972981d2090458c7a5c4c3bd829f2b6a0fd2" exitCode=0 Sep 30 17:52:41 crc kubenswrapper[4821]: I0930 17:52:41.313647 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sw7pk" event={"ID":"a800ba25-5ec6-4f87-8a22-0cab273c87d7","Type":"ContainerDied","Data":"8266f27496bb5c4c24cff488070f972981d2090458c7a5c4c3bd829f2b6a0fd2"} Sep 30 17:52:41 crc kubenswrapper[4821]: I0930 17:52:41.313674 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sw7pk" event={"ID":"a800ba25-5ec6-4f87-8a22-0cab273c87d7","Type":"ContainerStarted","Data":"c895404b0723a6a9b569d58d795adc9ef6d5b3d355861baa5ad74d96356f05dc"} Sep 30 17:52:42 crc kubenswrapper[4821]: I0930 17:52:42.326587 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sw7pk" event={"ID":"a800ba25-5ec6-4f87-8a22-0cab273c87d7","Type":"ContainerStarted","Data":"147be45d445a9c2c6cfeab88b804b2734df35f9373933ec95803d11ede1cc7eb"} Sep 30 17:52:43 crc kubenswrapper[4821]: I0930 17:52:43.336075 4821 generic.go:334] "Generic (PLEG): container finished" podID="a800ba25-5ec6-4f87-8a22-0cab273c87d7" containerID="147be45d445a9c2c6cfeab88b804b2734df35f9373933ec95803d11ede1cc7eb" exitCode=0 Sep 30 17:52:43 crc kubenswrapper[4821]: I0930 17:52:43.336162 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sw7pk" event={"ID":"a800ba25-5ec6-4f87-8a22-0cab273c87d7","Type":"ContainerDied","Data":"147be45d445a9c2c6cfeab88b804b2734df35f9373933ec95803d11ede1cc7eb"} Sep 30 17:52:43 crc kubenswrapper[4821]: I0930 17:52:43.706692 4821 scope.go:117] "RemoveContainer" containerID="8949956a6c788e489e965cc971d76b0643c24aff4ee5c92bb208d99216988148" Sep 30 17:52:43 crc kubenswrapper[4821]: E0930 17:52:43.707364 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:52:44 crc kubenswrapper[4821]: 
I0930 17:52:44.346520 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sw7pk" event={"ID":"a800ba25-5ec6-4f87-8a22-0cab273c87d7","Type":"ContainerStarted","Data":"be2281dfa06ba734bd5659f92c2d0cb07ab2bd74259931810d3019505279d5de"} Sep 30 17:52:50 crc kubenswrapper[4821]: I0930 17:52:50.536630 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sw7pk" Sep 30 17:52:50 crc kubenswrapper[4821]: I0930 17:52:50.537162 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sw7pk" Sep 30 17:52:50 crc kubenswrapper[4821]: I0930 17:52:50.580260 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sw7pk" Sep 30 17:52:50 crc kubenswrapper[4821]: I0930 17:52:50.608867 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sw7pk" podStartSLOduration=8.20497896 podStartE2EDuration="10.60885009s" podCreationTimestamp="2025-09-30 17:52:40 +0000 UTC" firstStartedPulling="2025-09-30 17:52:41.317487545 +0000 UTC m=+2957.222533489" lastFinishedPulling="2025-09-30 17:52:43.721358665 +0000 UTC m=+2959.626404619" observedRunningTime="2025-09-30 17:52:44.370528667 +0000 UTC m=+2960.275574611" watchObservedRunningTime="2025-09-30 17:52:50.60885009 +0000 UTC m=+2966.513896034" Sep 30 17:52:51 crc kubenswrapper[4821]: I0930 17:52:51.460327 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sw7pk" Sep 30 17:52:51 crc kubenswrapper[4821]: I0930 17:52:51.513127 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sw7pk"] Sep 30 17:52:53 crc kubenswrapper[4821]: I0930 17:52:53.411483 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sw7pk" podUID="a800ba25-5ec6-4f87-8a22-0cab273c87d7" containerName="registry-server" containerID="cri-o://be2281dfa06ba734bd5659f92c2d0cb07ab2bd74259931810d3019505279d5de" gracePeriod=2 Sep 30 17:52:54 crc kubenswrapper[4821]: I0930 17:52:54.024476 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sw7pk" Sep 30 17:52:54 crc kubenswrapper[4821]: I0930 17:52:54.166062 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bztbq\" (UniqueName: \"kubernetes.io/projected/a800ba25-5ec6-4f87-8a22-0cab273c87d7-kube-api-access-bztbq\") pod \"a800ba25-5ec6-4f87-8a22-0cab273c87d7\" (UID: \"a800ba25-5ec6-4f87-8a22-0cab273c87d7\") " Sep 30 17:52:54 crc kubenswrapper[4821]: I0930 17:52:54.166211 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a800ba25-5ec6-4f87-8a22-0cab273c87d7-catalog-content\") pod \"a800ba25-5ec6-4f87-8a22-0cab273c87d7\" (UID: \"a800ba25-5ec6-4f87-8a22-0cab273c87d7\") " Sep 30 17:52:54 crc kubenswrapper[4821]: I0930 17:52:54.166395 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a800ba25-5ec6-4f87-8a22-0cab273c87d7-utilities\") pod \"a800ba25-5ec6-4f87-8a22-0cab273c87d7\" (UID: \"a800ba25-5ec6-4f87-8a22-0cab273c87d7\") " Sep 30 17:52:54 crc kubenswrapper[4821]: I0930 17:52:54.166852 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a800ba25-5ec6-4f87-8a22-0cab273c87d7-utilities" (OuterVolumeSpecName: "utilities") pod "a800ba25-5ec6-4f87-8a22-0cab273c87d7" (UID: "a800ba25-5ec6-4f87-8a22-0cab273c87d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:52:54 crc kubenswrapper[4821]: I0930 17:52:54.178400 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a800ba25-5ec6-4f87-8a22-0cab273c87d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a800ba25-5ec6-4f87-8a22-0cab273c87d7" (UID: "a800ba25-5ec6-4f87-8a22-0cab273c87d7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:52:54 crc kubenswrapper[4821]: I0930 17:52:54.186265 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a800ba25-5ec6-4f87-8a22-0cab273c87d7-kube-api-access-bztbq" (OuterVolumeSpecName: "kube-api-access-bztbq") pod "a800ba25-5ec6-4f87-8a22-0cab273c87d7" (UID: "a800ba25-5ec6-4f87-8a22-0cab273c87d7"). InnerVolumeSpecName "kube-api-access-bztbq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:52:54 crc kubenswrapper[4821]: I0930 17:52:54.268846 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a800ba25-5ec6-4f87-8a22-0cab273c87d7-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:52:54 crc kubenswrapper[4821]: I0930 17:52:54.268887 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bztbq\" (UniqueName: \"kubernetes.io/projected/a800ba25-5ec6-4f87-8a22-0cab273c87d7-kube-api-access-bztbq\") on node \"crc\" DevicePath \"\"" Sep 30 17:52:54 crc kubenswrapper[4821]: I0930 17:52:54.268897 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a800ba25-5ec6-4f87-8a22-0cab273c87d7-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:52:54 crc kubenswrapper[4821]: I0930 17:52:54.419531 4821 generic.go:334] "Generic (PLEG): container finished" podID="a800ba25-5ec6-4f87-8a22-0cab273c87d7" containerID="be2281dfa06ba734bd5659f92c2d0cb07ab2bd74259931810d3019505279d5de" exitCode=0 Sep 30 17:52:54 crc kubenswrapper[4821]: I0930 17:52:54.419579 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sw7pk" event={"ID":"a800ba25-5ec6-4f87-8a22-0cab273c87d7","Type":"ContainerDied","Data":"be2281dfa06ba734bd5659f92c2d0cb07ab2bd74259931810d3019505279d5de"} Sep 30 17:52:54 crc kubenswrapper[4821]: I0930 17:52:54.419616 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sw7pk" event={"ID":"a800ba25-5ec6-4f87-8a22-0cab273c87d7","Type":"ContainerDied","Data":"c895404b0723a6a9b569d58d795adc9ef6d5b3d355861baa5ad74d96356f05dc"} Sep 30 17:52:54 crc kubenswrapper[4821]: I0930 17:52:54.419626 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sw7pk" Sep 30 17:52:54 crc kubenswrapper[4821]: I0930 17:52:54.419636 4821 scope.go:117] "RemoveContainer" containerID="be2281dfa06ba734bd5659f92c2d0cb07ab2bd74259931810d3019505279d5de" Sep 30 17:52:54 crc kubenswrapper[4821]: I0930 17:52:54.438680 4821 scope.go:117] "RemoveContainer" containerID="147be45d445a9c2c6cfeab88b804b2734df35f9373933ec95803d11ede1cc7eb" Sep 30 17:52:54 crc kubenswrapper[4821]: I0930 17:52:54.465142 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sw7pk"] Sep 30 17:52:54 crc kubenswrapper[4821]: I0930 17:52:54.472993 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sw7pk"] Sep 30 17:52:54 crc kubenswrapper[4821]: I0930 17:52:54.479677 4821 scope.go:117] "RemoveContainer" containerID="8266f27496bb5c4c24cff488070f972981d2090458c7a5c4c3bd829f2b6a0fd2" Sep 30 17:52:54 crc kubenswrapper[4821]: I0930 17:52:54.512724 4821 scope.go:117] "RemoveContainer" containerID="be2281dfa06ba734bd5659f92c2d0cb07ab2bd74259931810d3019505279d5de" Sep 30 17:52:54 crc kubenswrapper[4821]: E0930 17:52:54.521583 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be2281dfa06ba734bd5659f92c2d0cb07ab2bd74259931810d3019505279d5de\": container with ID starting with be2281dfa06ba734bd5659f92c2d0cb07ab2bd74259931810d3019505279d5de not found: ID does not exist" containerID="be2281dfa06ba734bd5659f92c2d0cb07ab2bd74259931810d3019505279d5de" Sep 30 17:52:54 crc kubenswrapper[4821]: I0930 17:52:54.521628 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be2281dfa06ba734bd5659f92c2d0cb07ab2bd74259931810d3019505279d5de"} err="failed to get container status \"be2281dfa06ba734bd5659f92c2d0cb07ab2bd74259931810d3019505279d5de\": rpc error: code = NotFound desc = could not find container \"be2281dfa06ba734bd5659f92c2d0cb07ab2bd74259931810d3019505279d5de\": container with ID starting with be2281dfa06ba734bd5659f92c2d0cb07ab2bd74259931810d3019505279d5de not found: ID does not exist" Sep 30 17:52:54 crc kubenswrapper[4821]: I0930 17:52:54.521655 4821 scope.go:117] "RemoveContainer" containerID="147be45d445a9c2c6cfeab88b804b2734df35f9373933ec95803d11ede1cc7eb" Sep 30 17:52:54 crc kubenswrapper[4821]: E0930 17:52:54.521966 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"147be45d445a9c2c6cfeab88b804b2734df35f9373933ec95803d11ede1cc7eb\": container with ID starting with 147be45d445a9c2c6cfeab88b804b2734df35f9373933ec95803d11ede1cc7eb not found: ID does not exist" containerID="147be45d445a9c2c6cfeab88b804b2734df35f9373933ec95803d11ede1cc7eb" Sep 30 17:52:54 crc kubenswrapper[4821]: I0930 17:52:54.522009 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"147be45d445a9c2c6cfeab88b804b2734df35f9373933ec95803d11ede1cc7eb"} err="failed to get container status \"147be45d445a9c2c6cfeab88b804b2734df35f9373933ec95803d11ede1cc7eb\": rpc error: code = NotFound desc = could not find container \"147be45d445a9c2c6cfeab88b804b2734df35f9373933ec95803d11ede1cc7eb\": container with ID starting with 147be45d445a9c2c6cfeab88b804b2734df35f9373933ec95803d11ede1cc7eb not found: ID does not exist" Sep 30 17:52:54 crc kubenswrapper[4821]: I0930 17:52:54.522036 4821 scope.go:117] "RemoveContainer" 
containerID="8266f27496bb5c4c24cff488070f972981d2090458c7a5c4c3bd829f2b6a0fd2" Sep 30 17:52:54 crc kubenswrapper[4821]: E0930 17:52:54.522329 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8266f27496bb5c4c24cff488070f972981d2090458c7a5c4c3bd829f2b6a0fd2\": container with ID starting with 8266f27496bb5c4c24cff488070f972981d2090458c7a5c4c3bd829f2b6a0fd2 not found: ID does not exist" containerID="8266f27496bb5c4c24cff488070f972981d2090458c7a5c4c3bd829f2b6a0fd2" Sep 30 17:52:54 crc kubenswrapper[4821]: I0930 17:52:54.522352 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8266f27496bb5c4c24cff488070f972981d2090458c7a5c4c3bd829f2b6a0fd2"} err="failed to get container status \"8266f27496bb5c4c24cff488070f972981d2090458c7a5c4c3bd829f2b6a0fd2\": rpc error: code = NotFound desc = could not find container \"8266f27496bb5c4c24cff488070f972981d2090458c7a5c4c3bd829f2b6a0fd2\": container with ID starting with 8266f27496bb5c4c24cff488070f972981d2090458c7a5c4c3bd829f2b6a0fd2 not found: ID does not exist" Sep 30 17:52:54 crc kubenswrapper[4821]: I0930 17:52:54.712822 4821 scope.go:117] "RemoveContainer" containerID="8949956a6c788e489e965cc971d76b0643c24aff4ee5c92bb208d99216988148" Sep 30 17:52:54 crc kubenswrapper[4821]: E0930 17:52:54.713242 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:52:54 crc kubenswrapper[4821]: I0930 17:52:54.723302 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a800ba25-5ec6-4f87-8a22-0cab273c87d7" path="/var/lib/kubelet/pods/a800ba25-5ec6-4f87-8a22-0cab273c87d7/volumes" Sep 30 17:53:06 crc kubenswrapper[4821]: I0930 17:53:06.709577 4821 scope.go:117] "RemoveContainer" containerID="8949956a6c788e489e965cc971d76b0643c24aff4ee5c92bb208d99216988148" Sep 30 17:53:06 crc kubenswrapper[4821]: E0930 17:53:06.711496 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:53:17 crc kubenswrapper[4821]: I0930 17:53:17.656772 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ggx2l"] Sep 30 17:53:17 crc kubenswrapper[4821]: E0930 17:53:17.657774 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a800ba25-5ec6-4f87-8a22-0cab273c87d7" containerName="registry-server" Sep 30 17:53:17 crc kubenswrapper[4821]: I0930 17:53:17.657791 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="a800ba25-5ec6-4f87-8a22-0cab273c87d7" containerName="registry-server" Sep 30 17:53:17 crc kubenswrapper[4821]: E0930 17:53:17.657818 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a800ba25-5ec6-4f87-8a22-0cab273c87d7" containerName="extract-utilities" Sep 30 17:53:17 crc kubenswrapper[4821]: I0930 
17:53:17.657828 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="a800ba25-5ec6-4f87-8a22-0cab273c87d7" containerName="extract-utilities" Sep 30 17:53:17 crc kubenswrapper[4821]: E0930 17:53:17.657872 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a800ba25-5ec6-4f87-8a22-0cab273c87d7" containerName="extract-content" Sep 30 17:53:17 crc kubenswrapper[4821]: I0930 17:53:17.657881 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="a800ba25-5ec6-4f87-8a22-0cab273c87d7" containerName="extract-content" Sep 30 17:53:17 crc kubenswrapper[4821]: I0930 17:53:17.658156 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="a800ba25-5ec6-4f87-8a22-0cab273c87d7" containerName="registry-server" Sep 30 17:53:17 crc kubenswrapper[4821]: I0930 17:53:17.659792 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ggx2l" Sep 30 17:53:17 crc kubenswrapper[4821]: I0930 17:53:17.664692 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ggx2l"] Sep 30 17:53:17 crc kubenswrapper[4821]: I0930 17:53:17.708229 4821 scope.go:117] "RemoveContainer" containerID="8949956a6c788e489e965cc971d76b0643c24aff4ee5c92bb208d99216988148" Sep 30 17:53:17 crc kubenswrapper[4821]: E0930 17:53:17.708464 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:53:17 crc kubenswrapper[4821]: I0930 17:53:17.837219 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21d07709-7dd7-419d-acce-e026a90cceeb-catalog-content\") pod \"certified-operators-ggx2l\" (UID: \"21d07709-7dd7-419d-acce-e026a90cceeb\") " pod="openshift-marketplace/certified-operators-ggx2l" Sep 30 17:53:17 crc kubenswrapper[4821]: I0930 17:53:17.837429 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n8fx\" (UniqueName: \"kubernetes.io/projected/21d07709-7dd7-419d-acce-e026a90cceeb-kube-api-access-5n8fx\") pod \"certified-operators-ggx2l\" (UID: \"21d07709-7dd7-419d-acce-e026a90cceeb\") " pod="openshift-marketplace/certified-operators-ggx2l" Sep 30 17:53:17 crc kubenswrapper[4821]: I0930 17:53:17.837494 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21d07709-7dd7-419d-acce-e026a90cceeb-utilities\") pod \"certified-operators-ggx2l\" (UID: \"21d07709-7dd7-419d-acce-e026a90cceeb\") " pod="openshift-marketplace/certified-operators-ggx2l" Sep 30 17:53:17 crc kubenswrapper[4821]: I0930 17:53:17.942704 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21d07709-7dd7-419d-acce-e026a90cceeb-catalog-content\") pod \"certified-operators-ggx2l\" (UID: \"21d07709-7dd7-419d-acce-e026a90cceeb\") " pod="openshift-marketplace/certified-operators-ggx2l" Sep 30 17:53:17 crc kubenswrapper[4821]: I0930 17:53:17.942815 4821 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-5n8fx\" (UniqueName: \"kubernetes.io/projected/21d07709-7dd7-419d-acce-e026a90cceeb-kube-api-access-5n8fx\") pod \"certified-operators-ggx2l\" (UID: \"21d07709-7dd7-419d-acce-e026a90cceeb\") " pod="openshift-marketplace/certified-operators-ggx2l" Sep 30 17:53:17 crc kubenswrapper[4821]: I0930 17:53:17.942851 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21d07709-7dd7-419d-acce-e026a90cceeb-utilities\") pod \"certified-operators-ggx2l\" (UID: \"21d07709-7dd7-419d-acce-e026a90cceeb\") " pod="openshift-marketplace/certified-operators-ggx2l" Sep 30 17:53:17 crc kubenswrapper[4821]: I0930 17:53:17.943662 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21d07709-7dd7-419d-acce-e026a90cceeb-utilities\") pod \"certified-operators-ggx2l\" (UID: \"21d07709-7dd7-419d-acce-e026a90cceeb\") " pod="openshift-marketplace/certified-operators-ggx2l" Sep 30 17:53:17 crc kubenswrapper[4821]: I0930 17:53:17.943912 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21d07709-7dd7-419d-acce-e026a90cceeb-catalog-content\") pod \"certified-operators-ggx2l\" (UID: \"21d07709-7dd7-419d-acce-e026a90cceeb\") " pod="openshift-marketplace/certified-operators-ggx2l" Sep 30 17:53:17 crc kubenswrapper[4821]: I0930 17:53:17.986339 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n8fx\" (UniqueName: \"kubernetes.io/projected/21d07709-7dd7-419d-acce-e026a90cceeb-kube-api-access-5n8fx\") pod \"certified-operators-ggx2l\" (UID: \"21d07709-7dd7-419d-acce-e026a90cceeb\") " pod="openshift-marketplace/certified-operators-ggx2l" Sep 30 17:53:18 crc kubenswrapper[4821]: I0930 17:53:18.031488 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ggx2l" Sep 30 17:53:18 crc kubenswrapper[4821]: I0930 17:53:18.597278 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ggx2l"] Sep 30 17:53:18 crc kubenswrapper[4821]: I0930 17:53:18.650157 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggx2l" event={"ID":"21d07709-7dd7-419d-acce-e026a90cceeb","Type":"ContainerStarted","Data":"afd0841042ba86eac02bd3c8aee38251b2f3fa61afb4d951525be759ec8c3cc7"} Sep 30 17:53:19 crc kubenswrapper[4821]: I0930 17:53:19.658040 4821 generic.go:334] "Generic (PLEG): container finished" podID="21d07709-7dd7-419d-acce-e026a90cceeb" containerID="bafc565b01dc345abbf9788acf2cb6798224090968d1484f417e0820dc6978bb" exitCode=0 Sep 30 17:53:19 crc kubenswrapper[4821]: I0930 17:53:19.658200 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggx2l" event={"ID":"21d07709-7dd7-419d-acce-e026a90cceeb","Type":"ContainerDied","Data":"bafc565b01dc345abbf9788acf2cb6798224090968d1484f417e0820dc6978bb"} Sep 30 17:53:20 crc kubenswrapper[4821]: I0930 17:53:20.666624 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggx2l" event={"ID":"21d07709-7dd7-419d-acce-e026a90cceeb","Type":"ContainerStarted","Data":"834d11c98246859096fbe14497d01c4135fea352823c5654ce9c06f8911cca63"} Sep 30 17:53:21 crc kubenswrapper[4821]: I0930 17:53:21.675396 4821 generic.go:334] "Generic (PLEG): container finished" podID="21d07709-7dd7-419d-acce-e026a90cceeb" containerID="834d11c98246859096fbe14497d01c4135fea352823c5654ce9c06f8911cca63" exitCode=0 Sep 30 17:53:21 crc kubenswrapper[4821]: I0930 17:53:21.675727 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggx2l" event={"ID":"21d07709-7dd7-419d-acce-e026a90cceeb","Type":"ContainerDied","Data":"834d11c98246859096fbe14497d01c4135fea352823c5654ce9c06f8911cca63"} Sep 30 17:53:22 crc kubenswrapper[4821]: I0930 17:53:22.685164 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggx2l" event={"ID":"21d07709-7dd7-419d-acce-e026a90cceeb","Type":"ContainerStarted","Data":"dc9ec21881d7822a62dbf192506445c957aa0240874713a2abfeacf26b5a1e92"} Sep 30 17:53:22 crc kubenswrapper[4821]: I0930 17:53:22.708646 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ggx2l" podStartSLOduration=3.2847806840000002 podStartE2EDuration="5.70862858s" podCreationTimestamp="2025-09-30 17:53:17 +0000 UTC" firstStartedPulling="2025-09-30 17:53:19.659841523 +0000 UTC m=+2995.564887467" lastFinishedPulling="2025-09-30 17:53:22.083689419 +0000 UTC m=+2997.988735363" observedRunningTime="2025-09-30 17:53:22.703680107 +0000 UTC m=+2998.608726051" watchObservedRunningTime="2025-09-30 17:53:22.70862858 +0000 UTC m=+2998.613674524" Sep 30 17:53:28 crc kubenswrapper[4821]: I0930 17:53:28.032365 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ggx2l" Sep 30 17:53:28 crc kubenswrapper[4821]: I0930 17:53:28.033674 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ggx2l" Sep 30 17:53:28 crc kubenswrapper[4821]: I0930 17:53:28.088391 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-ggx2l" Sep 30 17:53:28 crc kubenswrapper[4821]: I0930 17:53:28.707048 4821 scope.go:117] "RemoveContainer" containerID="8949956a6c788e489e965cc971d76b0643c24aff4ee5c92bb208d99216988148" Sep 30 17:53:28 crc kubenswrapper[4821]: E0930 17:53:28.707401 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:53:28 crc kubenswrapper[4821]: I0930 17:53:28.804478 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ggx2l" Sep 30 17:53:28 crc kubenswrapper[4821]: I0930 17:53:28.852203 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ggx2l"] Sep 30 17:53:30 crc kubenswrapper[4821]: I0930 17:53:30.757383 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ggx2l" podUID="21d07709-7dd7-419d-acce-e026a90cceeb" containerName="registry-server" containerID="cri-o://dc9ec21881d7822a62dbf192506445c957aa0240874713a2abfeacf26b5a1e92" gracePeriod=2 Sep 30 17:53:31 crc kubenswrapper[4821]: I0930 17:53:31.765408 4821 generic.go:334] "Generic (PLEG): container finished" podID="21d07709-7dd7-419d-acce-e026a90cceeb" containerID="dc9ec21881d7822a62dbf192506445c957aa0240874713a2abfeacf26b5a1e92" exitCode=0 Sep 30 17:53:31 crc kubenswrapper[4821]: I0930 17:53:31.765754 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggx2l" event={"ID":"21d07709-7dd7-419d-acce-e026a90cceeb","Type":"ContainerDied","Data":"dc9ec21881d7822a62dbf192506445c957aa0240874713a2abfeacf26b5a1e92"} Sep 30 17:53:32 crc kubenswrapper[4821]: I0930 17:53:32.277764 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ggx2l" Sep 30 17:53:32 crc kubenswrapper[4821]: I0930 17:53:32.311126 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n8fx\" (UniqueName: \"kubernetes.io/projected/21d07709-7dd7-419d-acce-e026a90cceeb-kube-api-access-5n8fx\") pod \"21d07709-7dd7-419d-acce-e026a90cceeb\" (UID: \"21d07709-7dd7-419d-acce-e026a90cceeb\") " Sep 30 17:53:32 crc kubenswrapper[4821]: I0930 17:53:32.311317 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21d07709-7dd7-419d-acce-e026a90cceeb-utilities\") pod \"21d07709-7dd7-419d-acce-e026a90cceeb\" (UID: \"21d07709-7dd7-419d-acce-e026a90cceeb\") " Sep 30 17:53:32 crc kubenswrapper[4821]: I0930 17:53:32.311385 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21d07709-7dd7-419d-acce-e026a90cceeb-catalog-content\") pod \"21d07709-7dd7-419d-acce-e026a90cceeb\" (UID: \"21d07709-7dd7-419d-acce-e026a90cceeb\") " Sep 30 17:53:32 crc kubenswrapper[4821]: I0930 17:53:32.316800 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21d07709-7dd7-419d-acce-e026a90cceeb-utilities" (OuterVolumeSpecName: "utilities") pod "21d07709-7dd7-419d-acce-e026a90cceeb" (UID: "21d07709-7dd7-419d-acce-e026a90cceeb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:53:32 crc kubenswrapper[4821]: I0930 17:53:32.352626 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21d07709-7dd7-419d-acce-e026a90cceeb-kube-api-access-5n8fx" (OuterVolumeSpecName: "kube-api-access-5n8fx") pod "21d07709-7dd7-419d-acce-e026a90cceeb" (UID: "21d07709-7dd7-419d-acce-e026a90cceeb"). InnerVolumeSpecName "kube-api-access-5n8fx". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:53:32 crc kubenswrapper[4821]: I0930 17:53:32.361323 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21d07709-7dd7-419d-acce-e026a90cceeb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "21d07709-7dd7-419d-acce-e026a90cceeb" (UID: "21d07709-7dd7-419d-acce-e026a90cceeb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:53:32 crc kubenswrapper[4821]: I0930 17:53:32.412850 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5n8fx\" (UniqueName: \"kubernetes.io/projected/21d07709-7dd7-419d-acce-e026a90cceeb-kube-api-access-5n8fx\") on node \"crc\" DevicePath \"\"" Sep 30 17:53:32 crc kubenswrapper[4821]: I0930 17:53:32.412899 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21d07709-7dd7-419d-acce-e026a90cceeb-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:53:32 crc kubenswrapper[4821]: I0930 17:53:32.412910 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21d07709-7dd7-419d-acce-e026a90cceeb-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:53:32 crc kubenswrapper[4821]: I0930 17:53:32.775873 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggx2l" event={"ID":"21d07709-7dd7-419d-acce-e026a90cceeb","Type":"ContainerDied","Data":"afd0841042ba86eac02bd3c8aee38251b2f3fa61afb4d951525be759ec8c3cc7"} Sep 30 17:53:32 crc kubenswrapper[4821]: I0930 17:53:32.775939 4821 scope.go:117] "RemoveContainer" containerID="dc9ec21881d7822a62dbf192506445c957aa0240874713a2abfeacf26b5a1e92" Sep 30 17:53:32 crc kubenswrapper[4821]: I0930 17:53:32.776159 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ggx2l" Sep 30 17:53:32 crc kubenswrapper[4821]: I0930 17:53:32.799685 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ggx2l"] Sep 30 17:53:32 crc kubenswrapper[4821]: I0930 17:53:32.800063 4821 scope.go:117] "RemoveContainer" containerID="834d11c98246859096fbe14497d01c4135fea352823c5654ce9c06f8911cca63" Sep 30 17:53:32 crc kubenswrapper[4821]: I0930 17:53:32.810965 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ggx2l"] Sep 30 17:53:32 crc kubenswrapper[4821]: I0930 17:53:32.831793 4821 scope.go:117] "RemoveContainer" containerID="bafc565b01dc345abbf9788acf2cb6798224090968d1484f417e0820dc6978bb" Sep 30 17:53:34 crc kubenswrapper[4821]: I0930 17:53:34.718441 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21d07709-7dd7-419d-acce-e026a90cceeb" path="/var/lib/kubelet/pods/21d07709-7dd7-419d-acce-e026a90cceeb/volumes" Sep 30 17:53:42 crc kubenswrapper[4821]: I0930 17:53:42.706786 4821 scope.go:117] "RemoveContainer" containerID="8949956a6c788e489e965cc971d76b0643c24aff4ee5c92bb208d99216988148" Sep 30 17:53:42 crc kubenswrapper[4821]: E0930 17:53:42.707621 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:53:55 crc kubenswrapper[4821]: I0930 17:53:55.707681 4821 scope.go:117] "RemoveContainer" containerID="8949956a6c788e489e965cc971d76b0643c24aff4ee5c92bb208d99216988148" Sep 30 17:53:55 crc kubenswrapper[4821]: E0930 17:53:55.708476 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:54:08 crc kubenswrapper[4821]: I0930 17:54:08.708766 4821 scope.go:117] "RemoveContainer" containerID="8949956a6c788e489e965cc971d76b0643c24aff4ee5c92bb208d99216988148" Sep 30 17:54:08 crc kubenswrapper[4821]: E0930 17:54:08.709397 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:54:16 crc kubenswrapper[4821]: I0930 17:54:16.110838 4821 generic.go:334] "Generic (PLEG): container finished" podID="ac374dbb-98a9-423f-8b7d-399e602c571a" containerID="66df0333382032aadba8c45ad8c392b635acec370f762992d147a6283b88a1e9" exitCode=0 Sep 30 17:54:16 crc kubenswrapper[4821]: I0930 17:54:16.110970 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wrlsz/must-gather-84smp" event={"ID":"ac374dbb-98a9-423f-8b7d-399e602c571a","Type":"ContainerDied","Data":"66df0333382032aadba8c45ad8c392b635acec370f762992d147a6283b88a1e9"} Sep 30 17:54:16 crc kubenswrapper[4821]: I0930 17:54:16.111828 4821 scope.go:117] "RemoveContainer" containerID="66df0333382032aadba8c45ad8c392b635acec370f762992d147a6283b88a1e9" Sep 30 17:54:16 crc kubenswrapper[4821]: I0930 17:54:16.381763 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wrlsz_must-gather-84smp_ac374dbb-98a9-423f-8b7d-399e602c571a/gather/0.log" Sep 30 17:54:19 crc kubenswrapper[4821]: I0930 17:54:19.707302 4821 scope.go:117] "RemoveContainer" containerID="8949956a6c788e489e965cc971d76b0643c24aff4ee5c92bb208d99216988148" Sep 30 17:54:19 crc kubenswrapper[4821]: E0930 17:54:19.708111 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:54:24 crc kubenswrapper[4821]: I0930 17:54:24.743617 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wrlsz/must-gather-84smp"] Sep 30 17:54:24 crc kubenswrapper[4821]: I0930 17:54:24.744671 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-wrlsz/must-gather-84smp" podUID="ac374dbb-98a9-423f-8b7d-399e602c571a" containerName="copy" containerID="cri-o://8715d8b232a1f3fb65228a662d22c405a699d6a8f7190aff4184944c9a6372c2" gracePeriod=2 Sep 30 17:54:24 crc kubenswrapper[4821]: I0930 17:54:24.749874 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wrlsz/must-gather-84smp"] Sep 30 17:54:25 crc kubenswrapper[4821]: I0930 17:54:25.227290 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wrlsz_must-gather-84smp_ac374dbb-98a9-423f-8b7d-399e602c571a/copy/0.log" 
Sep 30 17:54:25 crc kubenswrapper[4821]: I0930 17:54:25.228778 4821 generic.go:334] "Generic (PLEG): container finished" podID="ac374dbb-98a9-423f-8b7d-399e602c571a" containerID="8715d8b232a1f3fb65228a662d22c405a699d6a8f7190aff4184944c9a6372c2" exitCode=143 Sep 30 17:54:25 crc kubenswrapper[4821]: I0930 17:54:25.228878 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f69810bbd83feb8f3b2c6e3eacb37a8a88d800c348b589651f9955f7ebb58fcf" Sep 30 17:54:25 crc kubenswrapper[4821]: I0930 17:54:25.297621 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wrlsz_must-gather-84smp_ac374dbb-98a9-423f-8b7d-399e602c571a/copy/0.log" Sep 30 17:54:25 crc kubenswrapper[4821]: I0930 17:54:25.298069 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wrlsz/must-gather-84smp" Sep 30 17:54:25 crc kubenswrapper[4821]: I0930 17:54:25.379433 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsqw7\" (UniqueName: \"kubernetes.io/projected/ac374dbb-98a9-423f-8b7d-399e602c571a-kube-api-access-qsqw7\") pod \"ac374dbb-98a9-423f-8b7d-399e602c571a\" (UID: \"ac374dbb-98a9-423f-8b7d-399e602c571a\") " Sep 30 17:54:25 crc kubenswrapper[4821]: I0930 17:54:25.379523 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ac374dbb-98a9-423f-8b7d-399e602c571a-must-gather-output\") pod \"ac374dbb-98a9-423f-8b7d-399e602c571a\" (UID: \"ac374dbb-98a9-423f-8b7d-399e602c571a\") " Sep 30 17:54:25 crc kubenswrapper[4821]: I0930 17:54:25.385231 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac374dbb-98a9-423f-8b7d-399e602c571a-kube-api-access-qsqw7" (OuterVolumeSpecName: "kube-api-access-qsqw7") pod "ac374dbb-98a9-423f-8b7d-399e602c571a" (UID: "ac374dbb-98a9-423f-8b7d-399e602c571a"). InnerVolumeSpecName "kube-api-access-qsqw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:54:25 crc kubenswrapper[4821]: I0930 17:54:25.481585 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsqw7\" (UniqueName: \"kubernetes.io/projected/ac374dbb-98a9-423f-8b7d-399e602c571a-kube-api-access-qsqw7\") on node \"crc\" DevicePath \"\"" Sep 30 17:54:25 crc kubenswrapper[4821]: I0930 17:54:25.501524 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac374dbb-98a9-423f-8b7d-399e602c571a-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "ac374dbb-98a9-423f-8b7d-399e602c571a" (UID: "ac374dbb-98a9-423f-8b7d-399e602c571a"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:54:25 crc kubenswrapper[4821]: I0930 17:54:25.583918 4821 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ac374dbb-98a9-423f-8b7d-399e602c571a-must-gather-output\") on node \"crc\" DevicePath \"\"" Sep 30 17:54:26 crc kubenswrapper[4821]: I0930 17:54:26.237357 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wrlsz/must-gather-84smp" Sep 30 17:54:26 crc kubenswrapper[4821]: I0930 17:54:26.717070 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac374dbb-98a9-423f-8b7d-399e602c571a" path="/var/lib/kubelet/pods/ac374dbb-98a9-423f-8b7d-399e602c571a/volumes" Sep 30 17:54:33 crc kubenswrapper[4821]: I0930 17:54:33.709158 4821 scope.go:117] "RemoveContainer" containerID="8949956a6c788e489e965cc971d76b0643c24aff4ee5c92bb208d99216988148" Sep 30 17:54:33 crc kubenswrapper[4821]: E0930 17:54:33.709981 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:54:48 crc kubenswrapper[4821]: I0930 17:54:48.706688 4821 scope.go:117] "RemoveContainer" containerID="8949956a6c788e489e965cc971d76b0643c24aff4ee5c92bb208d99216988148" Sep 30 17:54:48 crc kubenswrapper[4821]: E0930 17:54:48.708496 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:55:01 crc kubenswrapper[4821]: I0930 17:55:01.708190 4821 scope.go:117] "RemoveContainer" containerID="8949956a6c788e489e965cc971d76b0643c24aff4ee5c92bb208d99216988148" Sep 30 17:55:01 crc kubenswrapper[4821]: E0930 17:55:01.708905 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:55:16 crc kubenswrapper[4821]: I0930 17:55:16.706967 4821 scope.go:117] "RemoveContainer" containerID="8949956a6c788e489e965cc971d76b0643c24aff4ee5c92bb208d99216988148" Sep 30 17:55:16 crc kubenswrapper[4821]: E0930 17:55:16.707710 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:55:30 crc kubenswrapper[4821]: I0930 17:55:30.291497 4821 scope.go:117] "RemoveContainer" containerID="66df0333382032aadba8c45ad8c392b635acec370f762992d147a6283b88a1e9" Sep 30 17:55:30 crc kubenswrapper[4821]: I0930 17:55:30.379337 4821 scope.go:117] "RemoveContainer" containerID="8715d8b232a1f3fb65228a662d22c405a699d6a8f7190aff4184944c9a6372c2" Sep 30 17:55:31 crc kubenswrapper[4821]: I0930 17:55:31.707787 4821 scope.go:117] "RemoveContainer" containerID="8949956a6c788e489e965cc971d76b0643c24aff4ee5c92bb208d99216988148" 
Sep 30 17:55:31 crc kubenswrapper[4821]: E0930 17:55:31.708277 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:55:42 crc kubenswrapper[4821]: I0930 17:55:42.707456 4821 scope.go:117] "RemoveContainer" containerID="8949956a6c788e489e965cc971d76b0643c24aff4ee5c92bb208d99216988148" Sep 30 17:55:42 crc kubenswrapper[4821]: E0930 17:55:42.708747 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:55:54 crc kubenswrapper[4821]: I0930 17:55:54.714956 4821 scope.go:117] "RemoveContainer" containerID="8949956a6c788e489e965cc971d76b0643c24aff4ee5c92bb208d99216988148" Sep 30 17:55:54 crc kubenswrapper[4821]: E0930 17:55:54.716176 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:56:07 crc kubenswrapper[4821]: I0930 17:56:07.707506 4821 scope.go:117] "RemoveContainer" containerID="8949956a6c788e489e965cc971d76b0643c24aff4ee5c92bb208d99216988148" Sep 30 17:56:07 crc kubenswrapper[4821]: E0930 17:56:07.708140 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:56:20 crc kubenswrapper[4821]: I0930 17:56:20.707162 4821 scope.go:117] "RemoveContainer" containerID="8949956a6c788e489e965cc971d76b0643c24aff4ee5c92bb208d99216988148" Sep 30 17:56:20 crc kubenswrapper[4821]: E0930 17:56:20.707747 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434" Sep 30 17:56:30 crc kubenswrapper[4821]: I0930 17:56:30.429666 4821 scope.go:117] "RemoveContainer" containerID="d833cbf49d7ae458ae0708213385fd8ddde07edcdcfcd03b184a28678ac07685" Sep 30 17:56:34 crc kubenswrapper[4821]: I0930 17:56:34.717452 4821 scope.go:117] "RemoveContainer" containerID="8949956a6c788e489e965cc971d76b0643c24aff4ee5c92bb208d99216988148" Sep 30 17:56:34 
crc kubenswrapper[4821]: E0930 17:56:34.719135 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q2xpd_openshift-machine-config-operator(1c2ce348-eadc-4629-a03f-fb8924b5b434)\"" pod="openshift-machine-config-operator/machine-config-daemon-q2xpd" podUID="1c2ce348-eadc-4629-a03f-fb8924b5b434"